Aether Studios’ 3D follow-up to Rivals of Aether was teased on April 1 via Twitch. (Aether Studios image)
The makers of the hit indie game Rivals of Aether announced Monday that they’ve founded a new studio in Seattle to continue work on their next game.
Aether Studios is being built around its first project, a new, still-untitled fighting game with 3D graphics that’s set in the same universe as Rivals of Aether. It’s also planned to serve as the “front-facing home” of future work in the Aether universe, which includes a forthcoming series of comic books. The Aether games currently on the market will continue, at least for now, to be published through creator Dan Fornace’s self-titled LLC.
The original game, Rivals of Aether, debuted on Windows and Xbox One in 2017. It’s self-billed as a “platform fighter,” putting it in the same genre as Nintendo’s Super Smash Bros., and was built by and for fans of that series. The original creator of Rivals, Fornace, worked at Microsoft Studios between 2011 and 2014, where he was the lead designer on the first season of content for the 2013 revival of Killer Instinct.
Like Smash, characters win in Rivals by throwing their opponents off-screen, rather than fighting to a knockout. The cast of Rivals are anthropomorphic animals, each of whom has been imbued with particular elemental powers, as part of a larger struggle between warring civilizations on the planet Aether.
Rivals has a pro league, the Rivals Championship Series, and two spin-offs. Lovers of Aether, which recasts the characters as high school students in an all-ages dating simulator, began as a 2019 April Fools’ joke; Creatures of Aether, a card game for mobile devices, launched last September.
Aether Studios has arrived! Our goal is to create the best new fighting game franchise.
The founding of Aether Studios marks the official move to turn the world of Aether into a full-blown multimedia franchise, in part because many of the most popular fighting games on the market today draw on universes with at least 20 years of lore behind them. There’s a lot of cultural momentum built up behind something like Street Fighter or Mortal Kombat, and Aether wants to tap into that.
“We want to build amazing fighting games,” the company wrote in its official announcement, “and we also want people to care about our roster of characters. Our other games and stories give us the chance to highlight our characters.”
Aether Studios is currently hiring staff to work on its 3D fighting game, including artists, animators, and programmers who are familiar with Unreal Engine 4.
The moniker beat out more than 1,200 name submissions from the community to become a finalist. It then received more than 30,000 votes to eclipse a few other finalists, including Sir Digs-A-Lot, a nod to iconic rapper Sir Mix-a-Lot, as well as Daphne, Molly the Mole and Boris the Plunger.
Band members unveiled the specially painted machine, by artist Devin Finley, in a ceremony complete with video, below:
Mudhoney will get to work this summer as Seattle Public Utilities and the King County Wastewater Treatment Division start on a 2.7-mile-long, 18-foot-10-inch-diameter storage tunnel, running beneath several neighborhoods, as part of the Ship Canal Water Quality Project.
The ultimate goal, by 2025, is to keep more than 75 million gallons of sewage and polluted stormwater, on average each year, from entering the Lake Washington Ship Canal, Salmon Bay and Lake Union.
Time lapse of artist Devin Finley painting @mudhoney on the @SeattleSPU Ship Canal Water Quality Project's Tunnel Boring Machine. It's beautiful… https://t.co/ONXa4N0bDu
For those who grew up watching the endless coverage of the Apollo program in the 60s and 70s, the sight of OV-102, better known as the Space Shuttle Columbia, perched on pad 39A at the Kennedy Space Center was somewhat disconcerting. Compared to the sleek lines of a Saturn V rocket, the spacecraft on display on April 12, 1981, seemed an ungainly beast. It looked like an airplane that had been tacked onto a grain silo, with a couple of Roman candles attached to it for good measure. Everything about it seemed the opposite of what we’d come to expect from spaceflight, but as the seconds ticked away to liftoff 40 years ago this day, we still had hope that this strange contraption wouldn’t disappoint.
At first, as the main engines ignited, it seemed that Columbia would indeed disappoint. The liquid hydrogen exhaust plume seemed anemic, at least compared to the gout of incandescent kerosene that had belched out from every rocket I’d ever seen launched. But then those magnificent — and as it later turned out, deadly dangerous — solid rocket boosters came to life, and Columbia fairly leaped off the launchpad. Americans were on their way to space again after a six-year absence, and I remember cheering astronauts John Young and Bob Crippen on as I watched the coverage with my dad that early Sunday morning.
STS-1
Crippen and Young training aboard Columbia for STS-1.
The seeds for what would become the Space Transportation System (STS), which was the official name for the Space Shuttle program, were sown even before the famous flight of Apollo 11 in 1969. The incredible expense of launching an almost completely expendable rocket to get astronauts into orbit or beyond was becoming untenable, so the focus switched to building a new generation of spacecraft with reusability in mind. Dozens of ideas were floated, but eventually the rocket-boosted spaceplane concept won out and the STS program was funded by Congress in 1972.
The first flight of Columbia on that April morning, which by sheer luck coincided with the 20th anniversary of Yuri Gagarin’s ride to space aboard Vostok-1, was a record-setter in many ways. Not only was it to be the first orbital flight of a reusable spaceplane, but it was also the first time an American spacecraft carried a crew on its maiden flight. Every rocket used for crewed missions to that point had had at least one uncrewed flight. Columbia had been tested on the pad with her main engines lit, and sister ship Enterprise had done extensive unpowered drop tests for approach and landing training, but everything between the countdown clock reaching zero and the end of reentry had never been done before.
STS-1 was a brief mission filled with technical tests; it was intended to make sure the orbiter was spaceworthy and did very little if any science. Young and Crippen stayed aloft for a little more than two days before deorbiting over the Indian Ocean, beginning the unpowered, Mach 24 reentry process. Much of the early reentry maneuvers were handled automatically by Columbia’s on-board computers, but Commander Young eventually took the stick and guided the spaceplane to a smooth landing on the dry lake beds of Edwards Air Force Base in California. STS-1 was complete, and the age of the Space Shuttle had begun.
Columbia’s Legacy
As with any major system, the design of the Shuttle was a compromise, but given its high profile as a successor to Apollo and the competing factions vying for the capabilities they wanted to see in a launch system, it’s a wonder the spacecraft ever got off the ground. Along with the test article Enterprise, the five STS orbiters — Columbia, Challenger, Discovery, Atlantis, and Endeavour — have been called the most complex machines ever built by humans. The truth of that is probably open to debate, but there’s no doubt that the complexity of the orbiters was at odds with their reusability, and the desired quick turnaround times between orbital missions were never delivered.
Still, the Shuttle fleet delivered a total of 133 successful missions, ferried 355 individuals to space, and delivered thousands of tons of payloads into orbit and beyond. Both the initial delivery of the Hubble Space Telescope and its later repairs were courtesy of the Shuttle, and a great many of the modules of the ISS were delivered in the orbiter’s ample cargo bay. The interplanetary missions that started in the payload bay of orbiters — notably Magellan, Galileo, and Ulysses — are still paying dividends in terms of understanding the nature of the universe.
For all its successes, the Space Shuttle program suffered a pair of catastrophic losses. As much as I remember the launch of STS-1, I much more keenly remember the loss of Challenger at launch on STS-51-L in 1986, and the reentry breakup of Columbia on STS-107 in 2003. Those losses, plus the failure to deliver the rapid turnaround and lower costs needed to maintain a reasonable tempo of launches, were the final nails in the coffin for the STS program, which was canceled after the 2011 landing of STS-135. Still, the program had staying power, and for 30 years it was the only way for America to get payloads upstairs.
GeekWire reporter Mike Lewis at a Seattle Sounders match.
You may know him as Seattle’s Storyteller. You may know him as the owner of Streamline Tavern.
And now, you’ll also know him as the newest reporter at GeekWire.
Please join us in welcoming Mike Lewis to the GeekWire newsroom!
Mike is a longtime Seattle journalist and editor who got his start in 2000 at the Seattle Post-Intelligencer as a politics reporter. The motorcycle-riding California native became known for his “Under the Needle” column that earned him the moniker of Seattle’s Storyteller.
Now at GeekWire, Mike will focus on an array of topics, including the intersection of civic life and technology, reporting on how policy, transportation, regulation, housing, and more impact the innovation economy. He’s already bolstered our coverage of the historic unionization effort at an Amazon fulfillment center in Alabama. We can’t wait to see Mike break more stories and help serve GeekWire readers with his dogged reporting.
If you’re in Seattle and looking for the city’s best dive bar, drop by the Streamline Tavern and say hello to Mike. Here’s a little more to know about GeekWire’s newest reporter:
Three words to describe Seattle in 2001: I! Chi! Ro!
Three words to describe Seattle in 2021: Wear Your Mask.
Best place to hang out in Seattle (other than The Streamline): Judkins Park, dance skate night.
Best concert you’ve been to: Public Enemy, Stockton, Calif., circa 1994
Movie you could watch 100 times: This is Spinal Tap
Coolest Seattle celebrity of all time: Bruce Lee
What you’re most looking forward to in a post-pandemic world: Sitting at the bar. Not a table. The bar.
Nvidia has released a new batch of pre-trained deep learning models and software aimed at interactive conversational AI services, promising improved speech recognition accuracy for enterprise applications.
The Jarvis translation platform announced during this week’s Nvidia GPU Technology Conference casts a wide net across different industry and domain applications. The Jarvis models are designed to generate more accurate speech recognition along with real-time translations to five languages—with more to come—along with text-to-speech capabilities for conversational AI agents.
Nvidia (NASDAQ: NVDA) promotes Jarvis as a GPU-accelerated deep learning AI platform for speech recognition and generation, language understanding and translations. “Jarvis interacts in about 100 milliseconds,” Nvidia CEO Jensen Huang noted in his GTC21 keynote address.
The machine translator was trained over several million GPU-hours on more than 1 billion pages of text, along with 60,000 hours of speech in different languages. Huang claimed Jarvis achieved 90-percent recognition accuracy “out of the box.”
The initial output from Jarvis can be fine-tuned with internal data using Nvidia’s new model training framework dubbed TAO, which customizes pre-trained models for “domain-specific applications” across different industries.
Jarvis currently supports English translations to and from French, German, Japanese, Russian, and Spanish, with more languages coming.
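The features described above chain together into a familiar conversational pipeline: speech recognition produces text, machine translation maps it to the target language, and text-to-speech renders the result. Here’s a schematic sketch of that flow; the three stage functions are illustrative stand-ins, not the actual Jarvis client API:

```python
# Schematic conversational-AI pipeline of the kind Jarvis packages:
# ASR -> machine translation -> TTS. All three stages below are
# illustrative stand-ins, not real Jarvis client calls.

def transcribe(audio):
    """ASR stage: pretend model output from an audio payload."""
    return audio["spoken_text"]

def translate(text, target="de"):
    """MT stage: tiny stand-in lexicon for one target language."""
    lexicon = {("hello", "de"): "hallo"}
    return lexicon.get((text, target), text)

def synthesize(text):
    """TTS stage: wrap the translated text as a fake waveform."""
    return {"waveform": f"<audio:{text}>"}

def pipeline(audio, target_lang):
    return synthesize(translate(transcribe(audio), target_lang))

print(pipeline({"spoken_text": "hello"}, "de"))
# -> {'waveform': '<audio:hallo>'}
```

The point is only the shape of the system: each stage is independently swappable, which is why Nvidia can fine-tune individual models with TAO without touching the rest of the chain.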
Source: Nvidia Corp.
Huang noted that Jarvis can be deployed in the cloud or on EGX AI edge accelerators in data centers, as well as in edge implementations running on Nvidia’s new 5G application framework, EGX Aerial.
Nvidia launched an early access program for Jarvis last year. So far, the conversational AI tools have attracted more than 45,000 downloads.
Among the early adopters is T-Mobile (NASDAQ: TMUS), which is using Jarvis for real-time customer service applications.
Huang also announced a partnership with Mozilla Common Voice, a crowdsourcing project that hosts the largest open multi-lingual voice data set covering 60 different languages. Nvidia DGX processors will be used to train Jarvis in developing pre-trained models using the public domain data set. Those models will be released for free to the open source community, Huang said.
“Let’s make universal translation possible, and help people around the world understand each other,” the Nvidia CEO added.
Nvidia also said new Jarvis features will be released during the second quarter as part of its ongoing beta program. The Jarvis toolkit can be downloaded now from the Nvidia NGC catalog, a hub for GPU-based deep learning, machine learning and HPC applications released in March.
Investors responded favorably to a slew of GPU-related announcements during the first day of the GTC event: Nvidia shares jumped more than 5 percent at the close of trading on Monday (April 12).
We don’t really think anyone in the Victorian era had a COSMAC Elf — the homebrew computer based around the RCA 1802 CPU. But if they did, it might have looked like [Daniel Ross’] steampunk recreation of the system that includes an appropriate-looking teletype device. You can see the thing in a series of videos, below. There are actually quite a few videos showing different parts of the system, along with several blog postings stretching back a few months.
A magic eye tube doesn’t look out of place in this build. We especially liked the glass tube displays and the speaker, although we thought the USS Enterprise looked out of place with the technology based on stone knives and bearskins, to paraphrase Mr. Spock. On the plus side, the VFD displays have the right glowing look, although a Nixie would have been pretty good there, too.
The videos don’t have much detail, but the blog posts do if you wanted to attempt something similar. Honestly, 1802 system design is pretty easy thanks to its on-chip DMA, which allows you to load memory from switches with no actual software like a monitor. The teletype started out life as a Remington #7 from around 1900, although another newer machine donated parts to get everything working. It is a testament to how well things were built then that it took as much abuse as it did and still has working parts.
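That switch-loading trick deserves a quick illustration. In the 1802’s LOAD mode, the CPU sits idle while its DMA-in channel latches whatever byte is on the data switches into memory at the address in R0, then increments R0, so an operator can toggle in a program with no monitor ROM at all. Here’s a minimal Python sketch of that bootstrap (the class and the example bytes are illustrative, not taken from [Daniel Ross’] build):

```python
# Minimal simulation of the COSMAC 1802's LOAD-mode bootstrap:
# in LOAD mode the CPU idles while the DMA-IN channel writes each
# byte presented on the data switches to M(R0), then increments R0.

class Cosmac1802LoadMode:
    def __init__(self, mem_size=256):
        self.memory = bytearray(mem_size)
        self.r0 = 0  # R0 doubles as the DMA pointer in LOAD mode

    def dma_in(self, switch_byte):
        """One press of the INPUT switch: latch the byte, bump R0."""
        self.memory[self.r0] = switch_byte & 0xFF
        self.r0 += 1

def toggle_in(program):
    """Operator sets the switches and presses IN once per byte."""
    cpu = Cosmac1802LoadMode()
    for byte in program:
        cpu.dma_in(byte)
    return cpu

# Four illustrative opcodes: INP 4 (0x6C), OUT 4 (0x64), BR 0x00 (0x30 0x00)
cpu = toggle_in([0x6C, 0x64, 0x30, 0x00])
print(cpu.memory[:4].hex())  # -> 6c643000
```

No ROM, no loader program, nothing running on the CPU at all: the hardware alone deposits the program into RAM, which is exactly why a front-panel Elf could be built with so few chips.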
We have a soft spot for the 1802 — it was a very good design for its time. We’ve even gone as far as to simulate it.
We will be the first to admit that it’s often hard to be productive while working from home, especially if no one’s ever really looking over your shoulder. Well, here is one creepy way to feel as though someone is keeping an eye on you, if that’s what gets you to straighten up and fly right. The Eyecam research project by [Marc Teyssier] et al. is a realistic, motorized eyeball that includes a camera and hangs out on top of your computer monitor. It aims to spark conversation about the sensors that are all around us already in various cold and clinical forms. It’s an open source project with a paper and a repo and a how-to video in the works.
The eyebrow-raising design pulls no punches in the uncanny department: the eye behaves as you’d expect (if you could have expected this) — it blinks, looks around, and can even waggle its brow. The eyeball, brow, and eyelids are actuated by a total of six servos that are controlled by an Arduino Nano.
Inside the eyeball is a Raspberry Pi camera connected to a Raspi Zero for the web cam portion of this intriguing horror show. Keep an eye out after the break for the Eyecam infomercial.
Creepy or fascinating, it succeeds in making people think about the vast amount of sensors around us now, and what the future of them could look like. Would mimicking eye contact be an improvement over the standard black and gray oblong eye? Perhaps a pair of eyes would be less unsettling, we’re not really sure. But we are left to wonder what’s next, a microphone that looks like an ear? Probably. Will it have hair sprouting from it? Perhaps.
For as raucous as things can get in the comments section of Hackaday articles, we really love the give and take that happens there. Our readers have an astonishing breadth of backgrounds and experiences, and the fact that everyone so readily shares those experiences and the strongly held opinions that they engender is what makes this community so strong and so useful.
But with so many opinions and experiences being shared, it’s sometimes hard to cut through to the essential truth of an issue. This is particularly true where health and safety are at issue, a topic where it’s easy to get bogged down by an accumulation of anecdotes that mask the underlying biology. Case in point: I recently covered a shop-built tool cabinet build and made an off-hand remark about the inadvisability of welding zinc-plated drawer slides, having heard about the dangers of inhaling zinc fumes once upon a time. That led to a discussion in the comments section on both sides of the issue that left the risks of zinc-fume inhalation somewhat unclear.
To correct this, I decided to take a close look at the risks involved with welding and working zinc. As a welding wannabe, I’m keenly interested in anything that helps me not die in the shop, and as a biology geek, I’m also fascinated by the molecular mechanisms of diseases. I’ll explore both of these topics as we look at the dreaded “zinc fever” and how to avoid it.
Flu-Like Symptoms
One of the first things you’ll notice if you research zinc fever is how hard it is to find useful information. Googling “zinc fever” will get you a load of articles about using zinc supplements to stave off viral infections, not to mention other medically dubious uses for zinc. That’s partly thanks to living in these pandemic times, but also shows the unusually high noise floor that attends most searches for actionable medical information, as opposed to anecdotes.
Thankfully, though, I was able to dig deep enough to discover that what’s called zinc fever is an actual illness that has been well-described in the medical literature since the mid-1800s. It goes by a wide range of names, from the wonderfully medieval-sounding “brass founder’s ague” to “the galvie flu”, all of which reflect the fact that this is largely an occupational hazard of the metalworking trades. The illnesses all fall under the broad category of “metal fume fever” or MFF.
The metal most strongly associated with MFF is zinc, either alone or in alloy with other metals — hence the association with brass, an alloy mainly composed of copper and zinc. Other metals that can cause the illness pretty much run the gamut of commonly worked metals; the most common culprits after zinc are chromium, cadmium, and copper.
Metal fume fever typically presents as a sudden onset of classic flu-like symptoms — fever, headache, muscle and joint aches, fatigue, nausea, and violent chills. Symptoms usually begin within a few hours of exposure to metal fumes via welding, grinding, or foundry operations. Diagnosis is typically made based on the history, as opposed to any blood tests or other diagnostics; basically, someone who presents to an emergency room with flu-like symptoms and reports welding within the last day or so will get a presumptive diagnosis of MFF, after ruling out other possible causes.
In almost every case study and review on MFF that I could find, the course of the illness was characterized as “self-limiting”. This is medical shorthand for “it’ll go away in a couple of days,” and indeed, for most metalworkers that’s clearly the case. While some people who’ve gotten MFF report a week or so to get back to feeling normal, most are up and around again after just a few days of feeling really, really crappy.
Most, but not all: take the extreme case of Jim “Paw-Paw” Wilson, a blacksmith of some renown in the smithing community. Back in 2005, when Jim was 65, he was building a stock rack from surplus galvanized pipe. Knowing the dangers of zinc fumes, Jim attempted to burn the coating off some pipe fittings in a gas-fueled forge. He apparently charged the forge with too many fittings at once, which filled the shop with billows of thick, white zinc oxide smoke. The smoke was so thick that it left deposits of zinc oxide 1/16″ (1.5 mm) thick on the inside of the forge.
As he probably had multiple times in his metalworking career, Jim took ill with the classic symptoms of MFF shortly after that forge session. He felt well enough within a few days to take a trip, but a week after the exposure he came down with bilateral pneumonia, which killed him the next week. While it’s true that Jim suffered from emphysema before the forge incident, and that probably contributed to the outcome, the fact remains that he likely would not have gotten the pneumonia that killed him had he not tried to burn off those fittings.
Although Jim’s case was an extreme one, both in terms of the amount of zinc oxide fumes produced and the victim’s underlying medical issues, it does illustrate the point that MFF can be dangerous under the right conditions. However, the risk of dying from MFF seems to be quite low. I couldn’t find much information about the epidemiology of the illness except that there are an estimated 1,500 to 2,500 cases per year in the United States, about 700 of which were reported to poison control and a third of which required medical treatment1. It’s not clear from this review whether any of these cases resulted in death, but it’s probably safe to assume that the authors would have mentioned any deaths that had occurred.
Speaking of poison control, an interesting aspect of MFF was revealed by a 2012 review of data from poison control in Victoria, Australia2. They plotted the number of calls to poison control against the day of the week that the incident occurred, and found that Monday was by far the most likely time for someone to come down with MFF. This goes along with one of the alternate names for MFF, “Monday morning fever”, and may have to do with a certain degree of tolerance that the body builds up with extended exposure to small amounts of metal fumes. The thinking is that after a weekend away from the shop, the body’s ability to deal with the zinc toxin has decreased, making Monday’s first exposure more likely to cause symptoms.
This is all well and good, but what about the meat of the problem: how do metal fumes cause flu-like symptoms? Put simply, we just don’t know. The mechanism doesn’t appear to be well studied, possibly due to the fact that the illness is generally self-limiting and non-fatal. But it’s likely that what causes the symptoms experienced during a legitimate case of the flu — or, as we’ve learned the last year, a coronavirus such as SARS-CoV-2 — also causes the symptoms of MFF. So the blame falls on the human immune system, with activation of white blood cells called neutrophils; the release of cytokines, signaling chemicals related to inflammation responses; and formation of oxygen radicals. These form the biochemical brew that makes you feel so bad during the flu, and it’s thought that zinc oxide and the other metal vapors associated with MFF somehow trigger their release too.
A plant metallothionein, which is similar to mammalian MTs. The sulfur-rich cysteine residues (yellow) form coordination centers that bind to metallic ions (purple) and scavenge them from cells. Source: Thomas Shafee, CC BY 4.0
Another clue as to how MFF happens is revealed by looking at that “Monday morning fever” aspect of the illness3. The ability to develop tolerance to metal fumes over time is thought to be related to the expression of metallothioneins (MTs), which are sulfur-rich proteins that are specialized for binding metal ions in the body. A single human MT molecule can scavenge up to seven zinc ions, sequestering them and preventing them from doing whatever they do to activate the immune system. Small amounts of metal ions are thought to stimulate MT expression, which tracks with building up a tolerance over the workweek. In the absence of stimulus, though, like over a weekend away from the shop, expression of MTs is down-regulated, meaning the hapless welder who gets a big dose of zinc on Monday likely has a reduced ability to deal with the threat.
And because someone is sure to mention it in the comments, we’ll point out that old-school welders swear by the drinking of copious quantities of milk before welding anything with zinc in it to stave off the symptoms of MFF. There are plenty of anecdotes out there about how well this works, and there’s speculation that the calcium in the milk somehow blocks or competes with the zinc ions. But given that most recommendations are for drinking four or more liters of milk, and that it has to be done before welding starts, it’s probably not going to be practical for most people as a prophylactic method.
So, what’s the take-home message on metal fume fever? I think, first and foremost, that welders need to realize that it’s a real illness and not just some old wives’ tale. From all accounts, the illness is self-limiting and temporary in nature, and unless you have underlying medical conditions, it doesn’t seem likely to kill you. Given how debilitating flu-like symptoms can be, though, I’m not sure why anyone would flirt with something that will make you feel like that, even if only for a couple of days. If I absolutely had to weld something galvanized, I’d make sure to do it with some sort of positive-pressure respirator, with fume extraction, or even outdoors to keep those noxious fumes away. Better to be overly cautious than to be laid up for a couple of days with symptoms that could easily be confused for something else, especially in this day and age.
Microsoft’s giant purchase: The tech giant announced its second-largest acquisition ever on Monday: a pending $19.7 billion deal to swoop up Boston-area company Nuance Communications, a longtime leader in artificial intelligence-fueled speech technology.
Why Nuance: The deal reflects Microsoft’s continued bet on healthcare. Nuance specializes in “conversational AI” for applications in hospitals and doctor’s offices; 77% of U.S. hospitals are Nuance customers.
“Nuance has huge reach among doctors and other health professionals, who will spend multiple hours on it per day,” said Chrissy Farr, a health tech investor at OMERS Ventures. “It’s also deeply integrated with the largest health IT players, including the electronic medical record companies.”
Microsoft has teamed up with Nuance in the past on healthcare-related deals to automatically create documents in the electronic health record (EHR) following a healthcare visit. Nuance already uses Microsoft’s Azure cloud computing platform, and Nuance’s ambient technology software DAX is integrated with Microsoft Teams.
“This acquisition brings our technology directly into the physician-patient loop, which is central to all healthcare delivery,” Microsoft CEO Satya Nadella said on a call with analysts Monday.
More from Nadella: The CEO also pointed to the pandemic’s acceleration of digital transformation “driven by industry-specific cloud solutions.”
“It’s now very clear that healthcare organizations that accelerate their digital investments can improve patient outcomes and reduce costs at scale,” he said. “Advances such as AI will have an enormous impact on augmenting human capability in healthcare. AI is technology’s most important priority, and healthcare is its most urgent application.”
Nadella said the Nuance deal will increase the company’s total addressable market in healthcare to nearly $500 billion. He also noted applications for Nuance’s tech beyond healthcare such as enterprise AI and biometric security.
“A perfect fit”: That’s how Xealth CEO Mike McSherry described the deal for Microsoft. McSherry leads Seattle healthtech startup Xealth and sold Swype to Nuance for $100 million in 2011. He said tech giants including Microsoft, Amazon, and Google have dabbled on the edges of healthcare with horizontal solutions but haven’t earned meaningful revenue.
“Healthcare usually requires verticalized solutions given all the regulatory, workflow, and privacy-related requirements to meet the needs and find customer traction,” said McSherry, nominated for CEO of the Year at the GeekWire Awards. “This could be a longer term attempt for Microsoft to reinvent EHR starting with a voice-first approach.”
“A trophy for Redmond”: That’s the reaction from Dan Ives, an analyst with Wedbush Securities, noting that the deal represents a “unique asset” for Microsoft.
“The Nuance deal is a strategic no-brainer in our opinion for Microsoft and fits like a glove into its healthcare endeavors at a time in which hospitals and doctors are embracing next generation AI capabilities from thought leaders such as Nuance,” Ives wrote in a research note.
Ives expects no major regulatory hurdles for Microsoft. He added that the company is on the “M&A warpath over the next 12-to-18 months,” citing recent reports of Microsoft’s interest in buying Discord and its $7.5 billion acquisition of ZeniMax.
Cloud business boost: The deal should add even more growth to Microsoft’s cloud arm. The company’s revenue climbed 17% to more than $43 billion for the December quarter, and profits rose 33% to $15.5 billion, eclipsing Wall Street’s expectations amid growing demand for its cloud services.
Among the company’s three main business divisions, the biggest revenue increase came in the Intelligent Cloud segment, up 23% from the prior year to $14.6 billion. Microsoft expects Nuance’s financials to be reported as part of the Intelligent Cloud segment.
Strong Seattle ties: Nuance already has a large presence in the Seattle region near Microsoft’s HQ as a result of several acquisitions including VoiceBox, Swype, Tweddle, Varolii, and Jott. Nuance in February acquired Saykara, a Seattle health-tech startup that makes a voice assistant for clinicians.
John Pollard, a Seattle tech vet and co-founder of Jott, said he was “psyched” for Nuance. He noted how the company has been shedding other business lines such as mobile and automotive — “and it looks like it really paid off,” Pollard said.
That electrical meter on the side of your house might not look like it, but it’s pretty packed with technology. What was once a simple electromechanical device that a human would have to read in person is now a node on a far-flung network. Not only does your meter total up the amount of electricity you use, but it also talks to other meters in the neighborhood, sending data skipping across town to routers that you might never have noticed as it makes its way back to the utility. And the smartest of smart meters not only know how much electricity you’re using, but they can also tease information about which appliances are being used simply by monitoring patterns of usage.
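That appliance-guessing trick goes by the name non-intrusive load monitoring in the literature: step changes in the home’s total power draw are matched against known per-appliance signatures. Here’s a toy sketch of the idea, with a made-up wattage table chosen purely for illustration:

```python
# Toy non-intrusive load monitoring: match step changes in a home's
# total power draw against known per-appliance wattage signatures.
# The signature table below is invented for illustration.

SIGNATURES = {1500: "space heater", 800: "microwave", 150: "refrigerator"}

def infer_events(samples, tolerance=50):
    """Return (sample index, appliance, 'on'/'off') for matched steps."""
    events = []
    for i in range(1, len(samples)):
        delta = samples[i] - samples[i - 1]  # step in total watts
        for watts, name in SIGNATURES.items():
            if abs(abs(delta) - watts) <= tolerance:
                events.append((i, name, "on" if delta > 0 else "off"))
    return events

# Whole-house wattage readings, one per sampling interval
readings = [200, 350, 350, 1850, 1850, 360, 360]
print(infer_events(readings))
# -> [(1, 'refrigerator', 'on'), (3, 'space heater', 'on'),
#     (5, 'space heater', 'off')]
```

Real meters sample far faster and use much richer features than bare wattage deltas, but even this crude version shows why per-interval consumption data is considered privacy-sensitive.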
While all this sounds great for utility companies, what does it mean for the customers? What are the implications of having a network of smart meters all talking to each other wirelessly? Are these devices vulnerable to attack? Have they been engineered to be as difficult to exploit as something should be when it’s designed to be in service for 15 years or more?
These questions and more burn within [Hash], a hardware hacker and security researcher who runs the RECESSIM reverse-engineering wiki. He’s been inside a smart meter or two and has shared a lot of what he has learned on the wiki and with some in-depth YouTube videos. He’ll stop by the Hack Chat to discuss what he’s learned about the internals of smart meters, how they work, and where they may be vulnerable to attack.
Click that speech bubble to the right, and you’ll be taken directly to the Hack Chat group on Hackaday.io. You don’t have to wait until Wednesday; join whenever you want and you can see what the community is talking about.
Glow launched in June 2019 with a way to help podcasters make money through membership programs. Publicly traded Libsyn will utilize Glow’s technology, including private feed distribution and subscription billing, for its more than 75,000 podcasts.
Glow raised $2.3 million from investors, including Greycroft. Terms of the acquisition were not released. Update: An SEC filing reveals that Libsyn agreed to pay $1.2 million as part of the deal, which includes $800,000 upfront and up to $400,000 over time.
CEO and co-founder Amira Valliani told GeekWire Monday that she started Glow because she “believed in a well-funded, thriving media” and she wanted to build new business models to create that ecosystem.
“Most people looked at me like I was crazy when I said that I was making it easy for podcasters to charge for content. No one thought that people would actually pay for podcasts on a large scale,” Valliani said. “I’m proud of this acquisition because it’s a demonstration that things have changed.
“Listener-supported podcasts are now mainstream, as is consumer-funded media,” she added. “I think that’s a great thing for podcasters, for content creators, and for society.”
Ben Gilbert, managing director of Pioneer Square Labs Ventures, co-hosts the popular podcast “Acquired,” which uses Glow’s technology, and says PSL is excited about the acquisition.
“We think it’ll be a great thing for all of Glow’s customers to have the support of a larger company in the podcasting ecosystem,” Gilbert said. “Amira and her team built a fantastic product, and we feel fortunate to have worked with her on Glow as a spinout of the PSL Studio.”
Other PSL spinouts that were acquired include TraceMe, which was bought by Nike in 2019. The social media-style product was co-founded by Seattle Seahawks quarterback Russell Wilson.
Glow has four full-time employees and seven team members in total.
In an email to podcast fans, Valliani said that she expects the transition to be seamless and that subscriptions and the ability to receive podcast content shouldn’t change at all. Podcast memberships will be managed by Libsyn under the Glow branding.
The podcast industry has seen a flurry of consolidation in recent months as U.S. advertising revenue nears $1 billion, according to a July 2020 report from the Interactive Advertising Bureau and PwC. Axios also reported on the heating up of the podcast wars last fall and deals involving such heavyweights as Spotify, iHeartMedia, Apple, and SiriusXM.
Axios also noted new deals such as Vox Media agreeing to buy Cafe Studios, and Spotify acquiring the parent company of sports-centric social audio app Locker Room.
The GeekWire Awards, presented by Wave Business, return in a virtual format on May 20.
The GeekWire Awards return on May 20, but before we get there we could use a little help. Today, we’re announcing the finalists across 13 voting categories — everything from Startup of the Year to Next Tech Titan to Deal of the Year.
We’ve also added two awesome new categories: Workplace of the Year and STEM Educator of the Year, a non-voting category that celebrates three amazing educators from the Pacific Northwest.
Who’ll win the robot at the GeekWire Awards?
We received more than 150 nominations this year, and our judging panel helped us whittle the pool to five deserving candidates in each category. Now, starting today, we’re opening up the voting to the larger GeekWire community.
The winners will be announced live from the virtual stage on May 20. To tune in to the celebration — including surprise cameos — make sure to register for free here.
Already know who gets your vote? Make your picks here or in the embedded ballot below. We’ll also feature the finalists over the next few weeks on GeekWire, with editorial posts on each category.
Community votes will be factored in with votes from our judges and GeekWire members, and then on May 20 we will announce the winners in a live virtual event hosted from Seattle’s Pacific Science Center.
Now in its 13th year, the GeekWire Awards are one of the most hotly anticipated events in the Seattle tech community, bringing together more than 1,000 geeks to celebrate innovation and the entrepreneurial spirit. Past winners have included Tableau, Smartsheet, Swype, Redfin, Convoy, Zulily, Avalara CEO Scott McFarlane, University of Washington computer scientist Ed Lazowska, DreamBox Learning CEO Jessie Woolley-Wilson, TAF and many others.
A big thanks to our longtime presenting sponsor on the GeekWire Awards: Wave Business.
As those of us with an interest in space exploration look forward with excitement towards new Lunar and Martian exploration, it’s worth casting our minds back for a moment because today marks a special anniversary. Sixty years ago on April 12th 1961, the Vostok 1 craft with its pilot Yuri Gagarin was launched from the Baikonur cosmodrome in what is now Kazakhstan. During the 108-minute mission he successfully completed an orbit of the Earth before parachuting from his craft after re-entry and landing on a farm near Engels, in the Saratov oblast to the south of Moscow.
Yuri Gagarin
In doing so he became the first human in space as well as the first to orbit the Earth. He became a hero to the Soviet and Russian people as well as the rest of the world, and scored a major victory for the Soviet space programme by beating the Americans to the prize. All the astronauts and cosmonauts who have been to space since then stand upon the shoulders of that first corps of pioneering pilots who left the atmosphere alone in their capsules, but it is Gagarin’s name that stands tallest among them.
We believe that, on our side of the world, Cold War politics should not be allowed to detract from the achievement of Gagarin and the engineers and scientists who placed him in orbit, which is why we prefer to tell the whole story when dealing with space history. If you’d like to read a bit more Vostok history, we’d like to point you at the story of another Soviet cosmonaut, Valentina Tereshkova, the first woman in space.
In this monthly feature, we’ll keep you up-to-date on the latest career developments for individuals in the big data community. Whether it’s a promotion, new company hire, or even an accolade, we’ve got the details. Check in each month for an updated list and you may even come across someone you know, or better yet, yourself!
Pascal Bornet
Aera Technology, a cognitive automation company, appointed Pascal Bornet as its chief data officer. Bornet brings over 20 years of experience, including founding the artificial intelligence and automation practices for McKinsey & Company and EY, where he drove the transformation of global enterprises across industries.
“I am excited to join the Aera Technology team,” said Bornet. “I believe in the combination of AI and Automation technologies to automate the most complex business use cases. This crossroad of domains perfectly describes Aera’s unique position in the market. It is about leveraging data, machine learning and process automation to generate the highest value for companies.”
Tim Cabral
SingleStore, the database of now for cloud-native modern applications, welcomed former chief financial officer Tim Cabral to its board of directors. At Veeva Systems, Cabral played a key role in the company’s expansion efforts. Cabral currently serves as the audit committee chair for Doximity and ServiceTitan.
“SingleStore has delivered amazing results in the past year amid an extremely challenging environment,” said Cabral. “I look forward to being involved with this database innovator and bringing the operational experience I have gained at both private and large public companies to SingleStore as the company positions for greater scale and success.”
Grant Crow, Tacis Gavoyannis, Brian Boyer, and Henk Jansen
Massive Analytic, a precognition AI company, welcomed Dr. Grant Crow, Tacis Gavoyannis, Brian Boyer, and Henk Jansen to its leadership team. Crow joined as Massive Analytic’s chief operating officer. She comes from HUMN.ai, where she held the same role.
Gavoyannis joined as its chief growth officer, where he will focus on the expansion of Massive Analytic’s customer base. Boyer joined as the company’s senior vice president of commercial operations, bringing over 25 years of experience. Lastly, Jansen joined Massive Analytic as its director of research and innovation, with a mission to develop new leading-edge technologies in autonomous control and quantum computing.
Peter Jack
Peter Jack joined Exasol, the high-performance analytics database vendor, as its chief data and analytics officer. Jack has held several leadership positions supporting companies through digital transformation. He will help Exasol customers make the most of its high-performance in-memory analytics database to accelerate their journey.
“I’m delighted to have joined Exasol. I’ve known the company for several years and its trajectory has been very impressive. I’m looking forward to helping Exasol get to the next level in its growth journey,” said Peter. “The opportunity in the data analytics market is enormous, and I am excited about supporting Exasol’s mission to accelerate insights from the world’s data.”
Shane James
MANTA, a data lineage platform provider, appointed Shane James as its senior vice president of customer success. James comes from Tanium, where he served as its director of customer success. Before Tanium, he spent over 16 years at Oracle, where he served as its director of North America, pre and post-sales engineering, and customer success.
“Similar to MANTA’s focus on enabling businesses to take advantage of their data, my goal is to ensure that our customers are getting the most out of their investments in our technology,” James said. “My focus will be on ensuring MANTA is a true business partner for our customers, which means understanding their unique business goals and figuring out how we can best support them as they work to achieve these goals.”
Paul Lewis
Pythian Services Inc., a data, analytics and cloud services company, appointed Paul Lewis as its chief technology officer. As CTO, Lewis will drive Pythian’s technology strategy, helping customers leverage and scale their data and cloud assets to deliver valuable business outcomes throughout their digital transformation journey.
“I’m thrilled to join Pythian’s esteemed leadership team,” said Lewis. “I look forward to championing the company’s relentless focus on data and the cloud to inform and elevate organizations across the globe.”
Todd Levy
Netlist, Inc., a high-performance SSD and modular memory subsystems vendor, appointed Todd Levy as its vice president of sales. Levy brings over 25 years of experience working in the memory and storage market.
Before Levy joined Netlist, he held several positions at SMART Modular Technologies, including senior director of worldwide sales and senior director of strategic accounts. Levy previously worked at Netlist in the early 2000s as a global account manager for Dell.
Robin Matlock and Anita Pandey
Dremio, a data lake service provider, welcomed Robin Matlock to its board of directors and Anita Pandey as its chief marketing officer. Matlock brings over 30 years of experience in marketing, sales and business development in the enterprise software and services market. In addition to Dremio, she is an independent director on the boards of Iron Mountain, Cohesity, and People.ai.
Pandey brings more than 20 years of technology leadership experience to Dremio, most recently as CMO of Cisco’s cloud security business unit, where she led marketing, inside sales and go-to-market strategy. She also served as vice president of marketing and cloud migration at Velostrata (acquired by Google in 2018).
Jeff Moyer
Jeff Moyer joined Luminoso, the company that turns unstructured text data into business-critical insights, as its president and chief executive officer. Before Moyer joined Luminoso, he held the role of senior vice president of private cloud and managed public cloud for Rackspace, a multi-cloud technology services company.
“Businesses are leaving money on the table if they don’t have a fast, accurate way of extracting sentiment and insights from the text data they receive through reviews, surveys, chatbot conversations, call center transcripts and many other sources,” said Moyer. “The Luminoso team has done an extraordinary job of both building text analytics solutions that are truly differentiated in the market, and helping some of the world’s most well-known brands use its applications to turn insights into revenue. As Luminoso enters its second decade, I look forward to leading the company through a period of exponential expansion.”
Steve Neat
Alation Inc., an enterprise data intelligence solutions provider, welcomed former Collibra executive Steve Neat as its new vice president of sales for Europe, the Middle East, and Africa. Neat will be responsible for addressing the region’s demand and developing new sales channels.
“Alation is well-positioned within the enterprise data intelligence market, as organizations of all sizes race to create a data culture and become more data-driven,” said Neat. “I’m excited to join Alation at this pivotal stage of growth, further expand our sales coverage beyond the U.S., and build out a local team to better serve our customers and help them succeed on their data intelligence journeys.”
Kate Reed
Kate Reed joined Syniti, an enterprise data management solution provider, as its chief marketing officer. A former CMO of IBM Security, Reed brings over 15 years of experience building and growing customer acquisition programs. She will oversee Syniti’s worldwide marketing strategy and execution.
“What drew me to Syniti was its customer-first mentality and software-led services approach that’s unmatched in the Enterprise Data Management space,” said Reed. “Syniti’s complex data expertise gives enterprises unique tools to gain competitive advantage and grow faster, and I’m pleased to be a part of this exciting next chapter with our team, partners and customers.”
Jonathan Reid
Yellowbrick Data welcomed Jonathan Reid as its chief revenue officer. Reid brings more than 25 years of experience in driving growth strategies for start-up companies. As CRO, he will be responsible for accelerating the adoption of Yellowbrick Data Warehouse within enterprises and commercial companies and building strong partnerships.
“The growth potential for Yellowbrick is absolutely staggering,” said Reid. “Companies are demanding real-time analytics as part of a digital transformation that’s been accelerated by the pandemic and the associated unprecedented growth in cloud and virtual business. Yellowbrick is uniquely poised to help companies transform through extraordinarily fast insights, with a price performance level unheard of in the industry.”
Paul Repice
Datadobi, an unstructured data management software vendor, appointed Paul Repice as its vice president of sales for the Americas. Repice will be responsible for creating and maintaining the company’s sales strategies. He comes from Tintri, where he served as the vice president of sales for the Americas.
“Organizations have been struggling to manage the growth of their unstructured data, both on-premises and in the cloud, and to use it to their advantage,” said Repice. “Datadobi’s stellar reputation attracted me to the company, and it is my privilege to join the team. I look forward to working with the Datadobi team to be able to extend the reach of our best-in-class solutions to enterprises around the world so that teams can do more with their petabytes of data and billions of files.”
Ajay Sabhlok
Rubrik, a cloud data management company, appointed Ajay Sabhlok as its chief information officer and chief data officer. Sabhlok brings more than 10 years of experience in the technology industry and will oversee the IT, data and advanced analytics strategies for Rubrik. Sabhlok, who joined the company in 2018, was promoted from his role as vice president and head of IT enterprise business applications.
“Today, more than at any time in history, leading businesses are hungry to create meaningful business value from their data,” said Sabhlok. “This new role presents the perfect opportunity for Rubrik to continue to deliver on its promise to enable enterprises to maximize value from their data in increasingly complex and fragmented environments across data centers and clouds. I’m thrilled to lead our IT and analytics strategies for Rubrik and unlock new insights and possibilities for our valued customers.”
Cory Scott and Melanie Vinson
Confluent, Inc., provider of the platform to set data in motion, appointed Cory Scott and Melanie Vinson as its chief information security officer and chief legal officer, respectively. Scott comes from Google, where he served as head of security and product privacy for the company’s devices and services division. Before Google, Cory was CISO at LinkedIn, where he advocated the use of Apache Kafka for security telemetry in monitoring and incident response.
Vinson comes from Adaptive Insights, a financial planning cloud company, where she served as general counsel and board secretary. She was responsible for building the legal team to support corporate governance, scalability, and privacy and compliance initiatives.
To read last month’s edition of Career Notes, click here.
Do you know someone that should be included in next month’s list? If so, send us an email at mariana@taborcommunications.com. We look forward to hearing from you.
We’re accustomed to seeing giant LED-powered screens in sports venues and outdoor displays. What would it take to bring this same technology into your living room? Very, very tiny LEDs. MicroLEDs.
MicroLED screens have been rumored to be around the corner for almost a decade now, which means that the time is almost right for them to actually become a reality. And certainly display technology has come a long way from the early cathode-ray tube (CRT) technology that powered the television and the home computer revolution. In the late 1990s, liquid-crystal display (LCD) technology became a feasible replacement for CRTs, offering a thin, distortion-free image with pixel-perfect image reproduction. LCDs also allowed for displays to be put in many new places, in addition to finally having that wall-mounted television.
Since that time, LCDs’ flaws have become a sticking point when compared with CRTs. The nice features of CRTs, such as very fast response times, deep blacks, and zero color shift regardless of viewing angle, have inspired a wide variety of LCD technologies that try to recapture some of those qualities. Plasma displays seemed promising for big screens for a while, but organic light-emitting diodes (OLEDs) have since taken over, taking still-in-development technologies like SED and FED off the table.
While OLED is very good in terms of image quality, its flaws include burn-in and uneven wear of the different organic dyes responsible for the colors. MicroLEDs hope to capitalize on OLED’s weaknesses by bringing brighter screens with no burn-in, using inorganic LED technology, just made very, very small.
So what does it take to scale a standard semiconductor LED down to the size of a pixel, and when can one expect to buy MicroLED displays? Let’s take a look.
All About the Photons
Schematic view of a color CRT: three electron guns along with the deflection coils to target the electrons onto the phosphor layer.
The most important property of a display is of course the ability to generate a sufficient number of photons to create a clear image. In the case of CRTs, this is done by accelerating electrons and smashing them into a phosphor layer. Each impact results in a change in the energy state of the phosphor molecule, which ultimately leads to the added energy being emitted again in the form of a photon. Depending on the phosphor used, the photon’s wavelength will differ, and presto, one has a display.
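The link between the energy released by the phosphor and the resulting photon’s wavelength follows directly from the Planck relation, λ = hc/E. A quick sketch, using illustrative transition energies roughly in the range of red, green, and blue photons:

```python
# Planck relation: photon wavelength (nm) from transition energy (eV).
# The example energies are illustrative values near red/green/blue light.

H_C_EV_NM = 1239.84  # h*c expressed in eV*nm

def wavelength_nm(energy_ev):
    """Wavelength in nanometers of a photon carrying `energy_ev` eV."""
    return H_C_EV_NM / energy_ev

for color, ev in [("red", 1.9), ("green", 2.3), ("blue", 2.7)]:
    print(f"{color}: {wavelength_nm(ev):.0f} nm")
```

A phosphor releasing about 2 eV per transition emits around 620 nm (orange-red), which is why choosing the phosphor chemistry effectively chooses the color of each dot on the screen.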
The reason why CRTs are rather bulky is because they use one electron gun per color. While this is fairly efficient, and the use of electromagnetic controls make for impressively fast scan rates, it does give CRTs a certain depth that is a function of display dimension. An interesting improvement on these classical CRTs came from Canon and Sony in the form of SED and FED, respectively during the early 2000s. These display technologies used semiconductor technology to create a single electron gun per pixel, which would fire at the phosphor layer, mere millimeters away.
By that time, however, LCD technology was already becoming firmly established. Unlike the somewhat similar plasma display technology, SED and FED never made it into mass production. Even then, LCD technology itself was going through some big growth spurts, trying to outgrow its early days of passive-matrix LCDs with slow response times, massive ghosting, and very narrow viewing angles using primitive twisted nematic (TN) panels.
Even though LCDs were clearly inferior to CRTs during the 1990s and into the early 2000s, what LCDs did have, however, was thinness. Thin enough to be put into mobile devices, like laptops and the ‘smart assistants’ of the time, such as personal digital assistants (PDAs). As LCDs gained features like active matrix technology which removed most ghosting, and new liquid crystal alignments (e.g. IPS, MVA) that improved viewing angles, so too did their popularity grow. Clearly, bulky displays were to be a thing of the past.
The Curse of the Backlight
Schematic overview of a twisted nematic (TN) LC display, showing the OFF and ON state, respectively.
An LCD has a number of layers that make it work. There is the liquid crystal layer that can block or let light through, there are the color filters that give pixels their colors, and there are the TFT control and polarization layers. Most LCDs use a backlight source that provides the photons that ultimately reach our eyes. Because of all these layers in between the backlight and our Mark I eyeballs, quite a lot of energy never makes it out of the display stack.
In the case of a ‘black’ pixel, the intention is to block 100% of the backlight’s energy in that section using the LC layer. This is both wasteful, and since the crystals in the LC layer do not fully block the light, LCDs are incapable of producing pure blacks. While some LCD technologies (e.g. MVA) provide a much better result here, this comes at compromises elsewhere, such as reduced response time.
This illustrates the most fundamental difference between a CRT display and an LC display: a CRT is fundamentally dark where the electrons don’t hit. SEDs, FEDs and plasma displays are also self-illuminating, as is OLED. This is a crucial factor when it comes to high dynamic range content.
With the move to LED-based backlights for LCDs, the situation has improved somewhat because an LCD can have different backlight sections that can activate separately. By using more, smaller LEDs in the backlight the number of so-called dimming zones can be increased, making darker blacks. You can see where this is going, right?
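The logic of a zoned backlight can be sketched with a toy local-dimming pass: split the frame into zones and drive each zone’s backlight no brighter than its brightest pixel needs. This is a simplification; real controllers also smooth levels between zones over space and time to hide halo artifacts.

```python
# Toy local dimming: one backlight level per zone, set to the level of
# the zone's brightest pixel. Real TVs add spatial/temporal smoothing.

def zone_backlight(scanline, zone_width):
    """scanline: pixel brightness values 0.0-1.0 for one row.
    Returns one backlight level per zone of `zone_width` pixels."""
    zones = []
    for start in range(0, len(scanline), zone_width):
        zones.append(max(scanline[start:start + zone_width]))
    return zones

# Bright object on the left half, pure black on the right half
scanline = [0.9, 0.8, 0.7, 0.6, 0.0, 0.0, 0.0, 0.0]
print(zone_backlight(scanline, 4))  # -> [0.9, 0.0]
```

With more, smaller zones, the second (black) region can have its backlight switched fully off while the bright region stays lit, which is exactly how dimming zones deepen blacks.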
The Future is Self-Illuminating
After decades of display technology evolution, a display technology’s popularity essentially comes down to four factors:
How cheaply it can be produced.
How well it reproduces colors.
How well it scales.
How many use cases it covers.
In the case of LCDs over CRTs it was clear why the latter couldn’t compete, and why plasma screens never made a big splash. It also makes it clear that – as demonstrated by e.g. Samsung exiting the LCD market – LCDs have hit somewhat of a dead end.
This is how Samsung apparently envisions MicroLED TVs will be used. Good thing that they have very high brightness levels. (Credit: Samsung)
MicroLEDs were invented over twenty years ago, and while e.g. Samsung’s The Wall is seeing limited commercial use, the most exciting development will probably come this year, with MicroLED TVs that fall into the ‘affordable’ range appearing, assuming one’s target is a 76″ MicroLED TV for roughly what an early plasma display would have cost.
Smaller MicroLED displays are unlikely to appear for a while. Immature manufacturing technologies and the need to reduce pixel pitch even further are the bottlenecks at the moment. This latter point is easily seen in the specifications for Samsung’s MicroLED TVs to be released this year: they only support 4K, even in the 110″ version. At 50″, 1080p (‘Full HD’) would be about the most one could hope for without sending production costs skyrocketing.
A Matter of Time
As cool as new technologies can be, one cannot expect them to fall off the production line one day, all perfect and ready to be used. Early CRTs and passive-matrix LCDs were terrible in their own unique ways. As the technology matured, however, CRTs became reliable workhorses at very affordable prices, and LCDs became pretty tolerable.
OLED technology started off with an optimistic ~1,000 hour lifespan on the early Sony prototypes, but today we see (AM)OLED displays everywhere, from cellphones to TVs and even as tiny monochrome or multi-color screens for embedded applications. With MicroLED having the benefit of being based on well-known semiconductor technologies, there’s little reason to doubt that it’ll undergo a similar evolution.
As MicroLED features higher brightness and a longer lifespan than OLED, along with lower latency, higher contrast ratio, greater color saturation, and a simplified display stack compared to LCDs, it’s little wonder that MicroLED displays are being produced not only by Samsung, but also by Sony (‘Crystal LED’) and AU Optronics, amidst a smattering of other display manufacturers, and tantalizing promises of small (<5″) MicroLED displays.
We know that everyone likes lots of tiny LEDs. Will your love last, once they become this commonplace?
The 2020 coronavirus pandemic upended the way companies do business. Some are coping better than others—but largely, businesses are optimistic about 2021.
That’s especially so for tech-forward organizations in two different industries—technology and manufacturing— that are planning major business initiatives to move beyond crisis response and thrive in a transformed corporate landscape. The pandemic accelerated trends that already were underway—and while 2020 might have been spent coping with the crisis, many business leaders are thinking about the next steps.
“We are in the middle of probably one of the biggest strategic moves the company has made in its history,” says Ritu Raj, director for enterprise engineering at John Deere. “That’s a big statement for a company that’s over 180 years old.”
According to a worldwide survey of 297 executives, conducted by MIT Technology Review Insights, in association with Oracle, 80% feel upbeat about their organizations’ ultimate goals for 2021, expecting to thrive—for example, sell more products and services—or transform—change business models, sales methodology, or otherwise do things differently.
The iconic manufacturer of agricultural and construction equipment is building a new operating model for the company with technology as the centerpiece, Raj says. For example, the tractors it’s selling today collect data about their operations and help farmers complete jobs like planting with precision. It’s one of the big moves— new business models, mergers and acquisitions, and big technology changes such as widespread automation— that organizations are making or planning in a landscape transformed by the pandemic.
A tale of two industries
Every industry has unique characteristics. Certainly that’s true of technology companies, which by their nature undergo rapid transformation. The industry tends to be early adopters of new technology, says Mike Saslavsky, senior director of high-tech industry strategy at Oracle. Most tech products have rapid, short lifecycles: “You have to stay up with the next generation of technology,” he adds. “If you’re not transforming and evolving your business, then you’re probably going to be out of the market.” That premise applies across the range of businesses categorized as “tech,” from chip manufacturers to consumer devices to office equipment such as copiers.
Manufacturing has traditionally maintained a more complicated relationship with technology. On the one hand, the industry is trying to be resilient and flexible in a volatile present, says John Barcus, group vice president of Oracle’s industry strategy group. Geopolitical issues like protectionism make it harder to get the right materials delivered for products, and the lockdowns imposed during the pandemic have caused further supply chain issues. That has led manufacturers to greater adoption of cloud technologies to connect partners, track goods, and streamline processes.
On the other hand, the industry has a reputation for short-term thinking—“If it works OK today, I can wait until tomorrow to fix it,” says Barcus. That shortsightedness is caused, often understandably, by cash-flow problems and risk associated with tech investment. “And then, all of a sudden something new hits that they weren’t prepared for and they have to react.”
There are shining examples of what manufacturers could be doing. For instance, global auto parts maker Aptiv spun off its powertrain business in 2017 to focus on high-growth areas such as advanced safety technology, connected services, and autonomous driving, says David Liu, who was until January 2020 director of corporate strategy. (He’s now director of corporate development at General Motors.) In 2019, Aptiv formed Motional, a $4 billion autonomous driving joint venture with Hyundai to accelerate the development and commercialization of autonomous vehicles. The pandemic forced the company to have both the financial discipline to withstand an unpredictable “black swan” event and the imagination and drive to do big things, Liu says. In June 2020, for example, the company made a $4 billion equity issuance to shore up its future growth through investments and possible acquisitions. “The key for us is to balance operational focus and long-term strategic thinking.”
The drive behind the plans
Among all survey respondents, the most common planned big moves are substantially increased technology investments (60%) and cloud migrations (46%), with more than a third acting on business-merger plans.
In the technology and manufacturing industries, there’s more commitment to digitize business, and the organizations that did so before the pandemic were better prepared to cope. For instance, they had the technology in place to allow their workforces to work from home, Barcus points out. In fact, the crisis accelerated those efforts. Whatever their progress, he says, “Many of them, if not most of them, are now looking at, ‘How do I prepare and thrive in this new environment?’”
Now is a tough time to be a retailer. Even before the 2020 coronavirus pandemic brought rapid changes to the market, many traditional brick-and-mortar businesses were struggling. For example, from 2011 to 2020, the number of US department stores shrank from 8,600 to just over 6,000.
The global crisis only amplified retail challenges. Since March 2020, at least 347 US companies cited the pandemic as a factor in their decisions to file for bankruptcy. Among them was Guitar Center, whose executives said its e-commerce sales couldn’t replace the experience of musicians trying out instruments in person. Some businesses are finding new ways to cope— or perhaps come out of the crisis in better shape than when it began. In 2021, it appears many retailers are ready to shift the way they do business.
MIT Technology Review Insights, in association with Oracle, surveyed 297 executives, primarily financial officers, C-suite, and information technology leaders, about their organizations’ plans for big business moves. These include new business models, mergers and acquisitions, and major technology changes, such as automating financial and risk management processes.
According to the research, 83% of executives across industries feel upbeat about their company’s ultimate objective for 2021, expecting to thrive or transform—that is, sell more products and services, or take up new business practices or sales methodologies. Overall, 80% of organizations made a big move in 2020 or are planning at least one in 2021.
The road ahead for retail
The shopping process will be different in 2021, says Mike Robinson, head of retail operations at The Eighth Notch, a tech platform that connects shippers and retailers, and former digital business leader at Macy’s. Among the hard-to-answer questions retailers are asking: “How can stores reassure people that it’s safe to return to congregating in places again? How can consumers trust that the store is doing the right thing from a cleanliness perspective?” Nobody has definitive answers, Robinson points out, but at least they’re asking.
Other special areas of concern for retail organizations in 2021: consumer and e-commerce cybersecurity risks. As cyberattacks get bolder and more frequent, retailers have to contemplate how to protect their data, starting with preventing credit card fraud. While that matters to any consumer business, Robinson says, the data protection challenge has extra resonance for retailers. To offer customers better, more personalized experiences, retailers need to collect more data to analyze, opening them up to more risk of a data breach.
The supply chain—manufacturing, shipping, and logistics—is also a key issue this year. The strain started showing in 2020, when pandemic lockdowns spread across the globe, exposing weaknesses in production processes and supply chains. And the US-China trade war caused many companies to look beyond China to Southeast Asian countries such as Vietnam or Thailand for production partners.
The supply chain isn’t only a financial concern. Robinson says ethical sourcing and manufacturing are becoming more important as consumers raise expectations about sustainability and worker safety. “That’s just going to continue to be more and more important as we move forward,” he adds.
Fortune favors the bold
It’s hard to plan for the long term during times of volatility—but that’s exactly what most businesses across industries are doing: more than half of surveyed organizations will ramp up technology investments in 2021, and 40% plan to move IT and business functions to the cloud (see Figure 1).
In some cases, the 2021 strategic plan is simply to ramp up for more business. Thriving companies that sell treadmill desks or sweatpants don’t need to change their business models. Because of increased demand at a time of heightened remote working, those retailers need only to fine-tune the manufacturing processes and work out shipping logistics.
But adapting to a new world means being open to new ideas. Business leaders ready to transform a company have to rethink everything: business models, product development, marketing processes, fulfilment, and success metrics. As a result, 87% of the organizations that expect business transformations in 2021 have some sort of big move planned.
Robinson believes now is the time to be bold, and retailers are realizing that. “People are going to be rewarded for taking chances and will probably be forgiven if it’s imperfect,” he says. When you are out of the usual options, try the unusual ones.
“Business didn’t stop just because of covid,” says Ashwat Panchal, vice president of internal audit at footwear retailer Skechers. “We’re expanding our distribution centers. We’re increasing our e-commerce footprint. We’re implementing new point-of-sale systems. We’re expanding into new territories.”
Borůvka's Algorithm is a greedy algorithm published by Otakar Borůvka, a Czech mathematician best known for his work in graph theory. Its most famous application helps us find the minimum spanning tree in a graph.
A thing worth noting about this algorithm is that it's the oldest minimum spanning tree algorithm on record. Borůvka came up with it in 1926, before computers as we know them today even existed. It was published as a method of constructing an efficient electricity network.
In this guide, we'll take a refresher on graphs, and what minimum spanning trees are, and then jump into Borůvka's algorithm and implement it in Python:
A graph is an abstract structure that represents a group of certain objects called nodes (also known as vertices), in which certain pairs of those nodes are connected or related. Each one of these connections is called an edge.
A tree is an example of a graph:
In the image above, the first graph has 4 nodes and 4 edges, while the second graph (a binary tree) has 7 nodes and 6 edges.
Graphs can be applied to many problems, from geospatial locations to social network graphs and neural networks. Conceptually, graphs like these are all around us. For example, say we'd like to plot a family tree, or explain to someone how we met our significant other. We might introduce a large number of people and their relationships to make the story as interesting to the listener as it was to us.
Since this is really just a graph of people (nodes) and their relationships (edges) - graphs are a great way to visualize this:
Types of Graphs
Depending on the types of edges a graph has, we have two distinct categories of graphs:
Undirected graphs
Directed graphs
An undirected graph is a graph in which the edges do not have orientations. All edges in an undirected graph are, therefore, considered bidirectional.
Formally, we can define an undirected graph as G = (V, E) where V is the set of all the graph's nodes, and E is a set that contains unordered pairs of elements from V, which represent edges.
Unordered pairs here means that the relationship between two nodes is always two-sided, so if we know there's an edge that goes from A to B, we know for sure that there's an edge that goes from B to A.
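To make the definition concrete, here's a tiny sketch (purely illustrative, separate from the Graph class used later in this guide) that stores an undirected edge as an unordered pair using frozenset:

```python
# An undirected graph G = (V, E): edges are unordered pairs of nodes.
V = {"A", "B", "C"}
E = {frozenset({"A", "B"}), frozenset({"B", "C"})}

# Because the pairs are unordered, (A, B) and (B, A) denote the same edge.
print(frozenset({"A", "B"}) == frozenset({"B", "A"}))  # True
print(frozenset({"B", "A"}) in E)                      # True
```
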
A directed graph is a graph in which the edges have orientations.
Formally, we can define a directed graph as G = (V, E) where V is the set of all the graph's nodes, and E is a set that contains ordered pairs of elements from V.
Ordered pairs imply that the relationship between two nodes can be either one- or two-sided: if there's an edge that goes from A to B, we can't know whether there's an edge that goes from B to A.
The direction of an edge is denoted with an arrow. Keep in mind that two-sided relationships can be shown either by drawing two distinct arrows or just drawing two arrow points on either side of the same edge:
Another way to differentiate graphs based on their edges is regarding the weight of those edges. Based on that, a graph can be:
Weighted
Unweighted
A weighted graph is a graph in which every edge is assigned a number - its weight. These weights can represent the distance between nodes, capacity, price et cetera, depending on the problem we're solving.
Weighted graphs are used pretty often, for example in problems where we need to find the shortest path or, as we will soon see, in problems in which we have to find a minimum spanning tree.
An unweighted graph does not have weights on its edges.
Note: In this article, we will focus on undirected, weighted graphs.
A graph can also be connected or disconnected. A graph is connected if there is a path (consisting of one or more edges) between each pair of nodes. On the other hand, a graph is disconnected if there is a pair of nodes that isn't connected by any path of edges.
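Connectivity is easy to test programmatically. Here's a quick sketch using breadth-first search over an adjacency list (is_connected is a helper introduced just for illustration, not part of the later implementation):

```python
from collections import deque

def is_connected(adjacency):
    """Return True if every node is reachable from an arbitrary start node."""
    nodes = list(adjacency)
    if not nodes:
        return True
    seen = {nodes[0]}
    queue = deque([nodes[0]])
    while queue:
        current = queue.popleft()
        for neighbor in adjacency[current]:
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    # Connected exactly when the search visited every node.
    return len(seen) == len(nodes)

print(is_connected({0: [1], 1: [0, 2], 2: [1]}))       # True
print(is_connected({0: [1], 1: [0], 2: [3], 3: [2]}))  # False
```
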
Trees and Minimum Spanning Trees
There's a fair bit to be said about trees, subgraphs and spanning trees, though here's a really quick and concise breakdown:
A tree is an undirected graph where each two nodes have exactly one path connecting them, no more, no less.
A subgraph of a graph A is a graph composed of a subset of graph A's nodes and edges.
A spanning tree of graph A is a subgraph of graph A that is a tree, whose set of nodes is the same as graph A's.
A minimum spanning tree is a spanning tree such that the sum of all the edge weights is the smallest possible. Since it's a tree (and the edge weight sum should be minimal), there shouldn't be any cycles.
Note: In case all edge weights in a graph are distinct, the minimum spanning tree of that graph is going to be unique. However, if the edge weights are not distinct, there can be multiple minimum spanning trees for only one graph.
Now that we've covered the graph theory basics, we can tackle the algorithm itself.
Borůvka's Algorithm
The idea behind this algorithm is pretty simple and intuitive. We mentioned before that this was a greedy algorithm.
When an algorithm is greedy, it constructs a globally "optimal" solution using smaller, locally optimal solutions for smaller subproblems. Usually, it converges with a good-enough solution, since following local optimums doesn't guarantee a globally optimum solution.
Simply put, greedy algorithms make the optimal choice (out of currently known choices) at each step of the problem, aiming to get to the overall most optimal solution when all of the smaller steps add up.
You could think of greedy algorithms as a musician who's improvising at a concert and will in every moment play what sounds the best. On the other hand, non-greedy algorithms are more like a composer, who'll think about the piece they're about to perform, and take their time to write it out as sheet music.
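The "locally optimal choice" idea fits in a few lines of code. The classic illustration is greedy coin change (an aside, unrelated to the Graph class we build later):

```python
def greedy_change(amount, coins=(25, 10, 5, 1)):
    """At each step, take the largest coin that still fits - a locally
    optimal choice. For this particular coin system the greedy result is
    also globally optimal; for other coin systems greedy can fail."""
    used = []
    for coin in coins:
        while amount >= coin:
            amount -= coin
            used.append(coin)
    return used

print(greedy_change(63))  # [25, 25, 10, 1, 1, 1]
```
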
Now, we will break down the algorithm in a couple of steps:
We initialize all nodes as individual components.
We initialize the minimum spanning tree S as an empty set that'll contain the solution.
While there is more than one component:
Find the minimum-weight edge that connects each component to any other component.
If this edge isn't in the minimum spanning tree S, we add it.
When only one component is left, the algorithm ends and S is the minimum spanning tree.
This algorithm takes a connected, weighted and undirected graph as an input, and its output is the graph's corresponding minimum spanning tree.
Let's take a look at the following graph and find its minimum spanning tree using Borůvka's algorithm:
At the start, every node represents an individual component. That means that we will have 9 components. Let's see what the smallest weight edges that connect these components to any other component would be:
| Component | Smallest weight edge that connects it to some other component | Weight of the edge |
| --- | --- | --- |
| {0} | 0 - 1 | 4 |
| {1} | 0 - 1 | 4 |
| {2} | 2 - 4 | 2 |
| {3} | 3 - 5 | 5 |
| {4} | 4 - 7 | 1 |
| {5} | 3 - 5 | 5 |
| {6} | 6 - 7 | 1 |
| {7} | 4 - 7 | 1 |
| {8} | 7 - 8 | 3 |
Now, our graph is going to be in this state:
The green edges in this graph represent the edges that bind together its closest components. As we can see, now we have three components: {0, 1}, {2, 4, 6, 7, 8} and {3, 5}. We repeat the algorithm and try to find the minimum-weight edges that can bind together these components:
| Component | Smallest weight edge that connects it to some other component | Weight of the edge |
| --- | --- | --- |
| {0, 1} | 0 - 6 | 7 |
| {2, 4, 6, 7, 8} | 2 - 3 | 6 |
| {3, 5} | 2 - 3 | 6 |
Now, our graph is going to be in this state:
As we can see, we are left with only one component in this graph, which represents our minimum spanning tree! The weight of this tree is 29, which we got after summing all of the edges:
Now, the only thing left to do is implement this algorithm in Python.
Implementation
We are going to implement a Graph class, which will be the main data structure we'll be working with. Let's start off with the constructor:
In this constructor, we provided the number of nodes in the graph as an argument, and we initialized three fields:
m_v - the number of nodes in the graph.
m_edges - the list of edges.
m_component - the dictionary which stores the index of the component which a node belongs to.
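The constructor itself wasn't reproduced above, but based on the three fields just described (and the way they are used in the later snippets), a minimal sketch would look like this:

```python
class Graph:
    def __init__(self, num_of_nodes):
        self.m_v = num_of_nodes   # the number of nodes in the graph
        self.m_edges = []         # the list of edges, each [u, v, weight]
        self.m_component = {}     # maps a node to the index of its component
```
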
Now, let's make a helper function that we can use to add an edge to a graph's nodes:
def add_edge(self, u, v, weight):
    self.m_edges.append([u, v, weight])
This function is going to add an edge in the format [first, second, edge weight] to our graph.
Because we want to ultimately make a method that unifies two components, we'll first need a method that finds the component index of a given node, and then a method that propagates updated component indexes throughout a component:
def find_component(self, u):
    if self.m_component[u] == u:
        return u
    return self.find_component(self.m_component[u])

def set_component(self, u):
    if self.m_component[u] == u:
        return
    else:
        for k in self.m_component.keys():
            self.m_component[k] = self.find_component(k)
In find_component, we artificially treat the dictionary as a tree. We check whether we've found the root of our component (only root nodes always point to themselves in the m_component dictionary). If we haven't found the root node, we recursively search the current node's parent.
Note: The reason we don't assume that m_component points to the correct component is that once we start unifying components, the only nodes that we know for sure won't change their component index are the roots.
For example, in our graph in the example above, in the first iteration, the dictionary is going to look like this:
| index | value |
| --- | --- |
| 0 | 0 |
| 1 | 1 |
| 2 | 2 |
| 3 | 3 |
| 4 | 4 |
| 5 | 5 |
| 6 | 6 |
| 7 | 7 |
| 8 | 8 |
We've got 9 components, and each node is a component by itself. In the second iteration, it's going to look like this:
| index | value |
| --- | --- |
| 0 | 0 |
| 1 | 0 |
| 2 | 2 |
| 3 | 3 |
| 4 | 2 |
| 5 | 3 |
| 6 | 7 |
| 7 | 4 |
| 8 | 7 |
Now, tracing back to the roots, we'll see that our new components will be: {0, 1}, {2, 4, 7, 6, 8} and {3, 5}.
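We can replay that tracing with a small standalone helper (root mirrors find_component, written iteratively here just for the demonstration) on the second-iteration dictionary:

```python
second_iteration = {0: 0, 1: 0, 2: 2, 3: 3, 4: 2, 5: 3, 6: 7, 7: 4, 8: 7}

def root(component, u):
    # Follow parent pointers until we reach a node that points to itself.
    while component[u] != u:
        u = component[u]
    return u

# Nodes 2, 4, 6, 7 and 8 all trace back to root 2:
print(sorted(u for u in second_iteration if root(second_iteration, u) == 2))
# [2, 4, 6, 7, 8]
```
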
The last method we're going to need before implementing the algorithm itself is the method that unifies two components into one, given two nodes which belong to their respective components:
def union(self, component_size, u, v):
    if component_size[u] <= component_size[v]:
        self.m_component[u] = v
        component_size[v] += component_size[u]
        self.set_component(u)
    elif component_size[u] >= component_size[v]:
        self.m_component[v] = self.find_component(u)
        component_size[u] += component_size[v]
        self.set_component(v)

    print(self.m_component)
In this function, we receive the roots of the two components (which are their component indexes at the same time). We compare the components by size and attach the smaller one to the larger one. Then we add the size of the smaller one to the size of the larger one, since they are now a single component.
Finally, if the components are of the same size, we just unite them however we want - in this particular implementation the first branch handles ties, attaching the first one to the second one.
Now that we've implemented all the utility methods we need, we can finally dive into Borůvka's algorithm:
def boruvka(self):
    component_size = []
    mst_weight = 0

    minimum_weight_edge = [-1] * self.m_v

    # Initialize every node as its own component, of size 1
    for node in range(self.m_v):
        self.m_component.update({node: node})
        component_size.append(1)

    num_of_components = self.m_v

    print("---------Forming MST------------")
    while num_of_components > 1:
        # Find the cheapest edge leaving each component
        for i in range(len(self.m_edges)):
            u = self.m_edges[i][0]
            v = self.m_edges[i][1]
            w = self.m_edges[i][2]

            u_component = self.m_component[u]
            v_component = self.m_component[v]

            if u_component != v_component:
                if minimum_weight_edge[u_component] == -1 or \
                        minimum_weight_edge[u_component][2] > w:
                    minimum_weight_edge[u_component] = [u, v, w]
                if minimum_weight_edge[v_component] == -1 or \
                        minimum_weight_edge[v_component][2] > w:
                    minimum_weight_edge[v_component] = [u, v, w]

        # Add the cheapest edges to the MST and merge the components they join
        for node in range(self.m_v):
            if minimum_weight_edge[node] != -1:
                u = minimum_weight_edge[node][0]
                v = minimum_weight_edge[node][1]
                w = minimum_weight_edge[node][2]

                u_component = self.m_component[u]
                v_component = self.m_component[v]

                if u_component != v_component:
                    mst_weight += w
                    self.union(component_size, u_component, v_component)
                    print("Added edge [" + str(u) + " - "
                          + str(v) + "]\n"
                          + "Added weight: " + str(w) + "\n")
                    num_of_components -= 1

        # Reset the cheapest-edge list for the next round
        minimum_weight_edge = [-1] * self.m_v

    print("----------------------------------")
    print("The total weight of the minimal spanning tree is: " + str(mst_weight))
The first thing we did in this algorithm was initialize additional lists we would need in the algorithm:
A list of components (initialized to all of the nodes).
A list that keeps their size (initialized to 1), as well as the list of the minimum weight edges (-1 at first, since we don't know what the minimum weight edges are yet).
Then, we go through all of the edges in the graph, and we find the root of components on both sides of those edges.
After that, we are looking for the minimum weight edge that connects these two components using a couple of if clauses:
If the current minimum weight edge of component u doesn't exist (is -1), or if it's greater than the edge we're observing right now, we will assign the value of the edge we're observing to it.
If the current minimum weight edge of component v doesn't exist (is -1), or if it's greater than the edge we're observing right now, we will assign the value of the edge we're observing to it.
After we've found the cheapest edges for each component, we add them to the minimum spanning tree, and decrease the number of components accordingly.
Finally, we reset the list of minimum weight edges back to -1, so that we can do all of this again. We keep iterating as long as there is more than one component in the list of components.
Let's put the graph we used in the example above as the input of our implemented algorithm:
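Since the original graph image isn't reproduced here, the driver below rebuilds it from the edges named in the worked example (the pictured graph may have had further edges), plus one extra non-tree edge - 1 to 2 with a made-up weight of 8 - so the algorithm has something to reject. The class assembles the snippets above, condensed slightly: the debug print in union is dropped, and boruvka additionally returns the total weight for convenience. With this input, the reported total matches the article's 29:

```python
class Graph:
    def __init__(self, num_of_nodes):
        self.m_v = num_of_nodes
        self.m_edges = []
        self.m_component = {}

    def add_edge(self, u, v, weight):
        self.m_edges.append([u, v, weight])

    def find_component(self, u):
        if self.m_component[u] == u:
            return u
        return self.find_component(self.m_component[u])

    def set_component(self, u):
        if self.m_component[u] != u:
            for k in self.m_component.keys():
                self.m_component[k] = self.find_component(k)

    def union(self, component_size, u, v):
        # Attach the smaller component to the larger one (ties: first to second)
        if component_size[u] <= component_size[v]:
            self.m_component[u] = v
            component_size[v] += component_size[u]
            self.set_component(u)
        else:
            self.m_component[v] = self.find_component(u)
            component_size[u] += component_size[v]
            self.set_component(v)

    def boruvka(self):
        component_size = []
        mst_weight = 0
        minimum_weight_edge = [-1] * self.m_v
        for node in range(self.m_v):
            self.m_component[node] = node
            component_size.append(1)
        num_of_components = self.m_v
        while num_of_components > 1:
            # Cheapest edge leaving each component
            for u, v, w in self.m_edges:
                u_component = self.m_component[u]
                v_component = self.m_component[v]
                if u_component != v_component:
                    for c in (u_component, v_component):
                        if minimum_weight_edge[c] == -1 or minimum_weight_edge[c][2] > w:
                            minimum_weight_edge[c] = [u, v, w]
            # Add those edges and merge the components they connect
            for node in range(self.m_v):
                if minimum_weight_edge[node] != -1:
                    u, v, w = minimum_weight_edge[node]
                    if self.m_component[u] != self.m_component[v]:
                        mst_weight += w
                        self.union(component_size, self.m_component[u], self.m_component[v])
                        print("Added edge [" + str(u) + " - " + str(v) + "], weight " + str(w))
                        num_of_components -= 1
            minimum_weight_edge = [-1] * self.m_v
        print("The total weight of the minimal spanning tree is: " + str(mst_weight))
        return mst_weight


g = Graph(9)
for u, v, w in [(0, 1, 4), (2, 4, 2), (3, 5, 5), (4, 7, 1), (6, 7, 1),
                (7, 8, 3), (0, 6, 7), (2, 3, 6), (1, 2, 8)]:
    g.add_edge(u, v, w)
g.boruvka()  # prints the added edges, then a total weight of 29
```
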
The time complexity of this algorithm is O(ElogV), where E represents the number of edges, while V represents the number of nodes.
The space complexity of this algorithm is O(V + E), since we have to keep a couple of lists whose sizes are equal to the number of nodes, as well as keep all the edges of a graph inside of the data structure itself.
Conclusion
Even though Borůvka's algorithm is not as well known as some other minimum spanning tree algorithms like Prim's or Kruskal's minimum spanning tree algorithms, it gives us pretty much the same result - they all find the minimum spanning tree, and the time complexity is approximately the same.
One advantage that Borůvka's algorithm has compared to the alternatives is that it doesn't need to presort the edges or maintain a priority queue in order to find the minimum spanning tree. Even though that doesn't help its complexity, since it still passes over the edges O(log V) times, it is a bit simpler to code.
Our best practices for managing data are ancient. Literally.
For tens of thousands of years we’ve managed the future by predetermining the resources we think we’ll need, limiting our futures to what we can foresee. We’re now in the age of AI with a radically more open future. AI, especially machine learning, can learn from seemingly insignificant data, often to our human delight and surprise.
So, it’s time to rethink our processes and mindset when it comes to data. We need to stop limiting data to the futures we expect. That’s not easy because predetermination is a deep habit that guides much of our lives. For instance, we fill our kitchen pantries with the ingredients we think we need to save space, time, and money.
But that predetermination inhibits innovation. We’re unlikely to try, say, a new Indonesian recipe if we’re missing kaffir lime leaves.
Worse, our imagination itself is muted by our awareness of the supplies on hand. This is an even more pernicious type of predetermination because we often don’t recognize that it’s happening.
Call it predetermination bias.
Culling Bias
Predetermination bias is often hardcoded into the ETL (extract, transform, load) pipelines enterprises use to manage data today. Typically only a small stream of application data, about 5% to 10% of the total, makes it through the pipeline and lands in a data warehouse for analysis.
(Andrii Yalanskyi/Shutterstock)
Data lakes and NoSQL data stores improve upon the process by switching ETL to ELT. That’s a good start, but it’s not enough. Extracting data leaves most of the data behind. Loading results in yet another copy of data that keeps getting bigger and bigger. And transforming data still strips off what is unique in order to standardize it.
Data warehouses, data lakes, and NoSQL stores are useful, just like kitchen pantries. But they reduce the total available information.
In the Age of Machine Learning, we’re learning the importance of what might seem like insignificant data. Machine learning neural networks compute data weights from webs of correlations often too complex for our minds to unravel. The amount and complexity of the data they process is why machines can now beat the world’s human champions at chess and can diagnose diseases from data in ways that doctors and computer scientists cannot understand.
Then there’s the lesson from the tech giants. When your applications service billions of users, you have to rely on algorithms to make immediate judgments based on as much data as possible.
Now it’s time to end data predetermination for the rest of the world’s businesses. We need a new model for thinking about and acting on data.
Leave No Data Behind
If predetermination, ETL, and ELT are passé, then what does a modern data process look like?
Here are seven guidelines for modern data management:
Access to all the primary data: Most of the valuable data in an enterprise sits in enterprise applications, from mainframe to cloud native. Instead of settling for a thin stream of ETL or ELT data, we need access to all of the primary data across multi-generational platforms. And we need an efficient way to access it that does not impact or add risk to production applications.
(Semisatch/Shutterstock)
API-driven access: Scurrying from department to department to learn the magical phrases to get data access takes too long. In today’s world, we need APIs (application programming interfaces) that provide simple, uniform ways to request data from all sources.
Data privacy and compliance: Regulatory compliance cannot be an afterthought. Today, companies must pursue responsible innovation by securing data used in analytics and training AI models. Enterprise data needs to be masked to keep personally identifiable and other sensitive information from reaching the wrong hands.
Data history and integrity: Data changes quickly over time, and data relationships matter. When feeding data from different sources into machine learning, it’s critical to make sure all the data comes from precisely the same time to preserve relationship integrity. In addition, historical data can be used to iteratively train and test machine learning models to tune and improve outcomes.
Version control: The world continues to change, so models that work today may fail tomorrow. That means we need version control—access to the source data used to train the failing models—so we can perform drift analysis, see what has changed in the data, and properly retune and retrain our models.
Automation: While machine learning is sophisticated and high tech, most of the daily work is manual and prosaic—data wrangling, preparation, cleanup, and separating datasets for training, testing, and validation. All of these operations add friction to the process. Automation overcomes that friction, enabling faster and more effective use of data.
Data anywhere: Today, enterprise applications live across the multi-cloud—SaaS, private clouds, and public clouds. And cloud vendors are constantly evolving and competing on AI technology offerings. So it’s critical for companies to be able to sync compliant data wherever they need to best process data for strategic advantage.
These seven guidelines don’t spell the end of the data warehouse. There are times when we know exactly the data we need now and in the future. In those situations, predetermination still holds value.
But in the land of tech giants and AI, we need a new model to keep up with the data and the times.
About the author: Jedidiah Yueh has led two waves of disruption in data management, first as founding CEO of Avamar (sold to EMC in 2006 for $165M), which pioneered data de-duplication and shipped one of the leading products in data backup and recovery, with over 20,000 customers and $5B in cumulative sales. After Avamar, Jed founded Delphix, which provides an API-first data platform to accelerate digital transformation for over 25% of the Global 100 and has surpassed $100 million in ARR. In 2013, the San Francisco Business Times named Jed CEO of the Year. Jed is the bestselling author of Disrupt or Die, a book that refutes conventional ideas on innovation with proven frameworks from Silicon Valley. After being designated a US Presidential Scholar by George H. Bush, Jed graduated Phi Beta Kappa, magna cum laude from Harvard, while working three jobs, including teaching at a local high school.
var
  LJSONArray: TJSONArray;
  LJSONObject: TJSONObject;
begin
  LJSONObject := qrySamples.ToJSONObject(); // export a single record
  LJSONArray := qrySamples.ToJSONArray(); // export all records
end;
As the US government pumps billions of dollars into projects aimed at curbing the pandemic, from vaccine development to genomic sequencing, officials claim they are being transparent about how money is being spent. But government contractors have a lot of leeway to hide things, as shown by a recent records request filed by MIT Technology Review.
After reporting on the struggles of the US’s $44 million vaccine management system, we requested documents related to the CDC’s no-bid contracts for the underlying software, awarded to consulting giant Deloitte. The records we got back had significant redactions—including the company’s costs, the identities of those who worked on the project, and even Deloitte’s explanation for why it was qualified to do the job.
The CDC paid Deloitte to build a system that would help doctors manage vaccine inventory and report shots, let eligible people schedule appointments, and send out second-shot reminders and proofs of vaccination.
Months after the contracted deadline, Deloitte delivered a customized version of a preexisting Salesforce product called Vaccine Cloud. It was so difficult to use that only a handful of states signed up, as we reported in January.
But the documents released under the Freedom of Information Act deliberately blocked certain pieces of information from the public record, including what prior experience Deloitte had with building similar tools and how charges like travel expenses and labor were justified or broken down. They also redacted the names of everyone involved—even the communications person assigned to the project, who would likely be responsible for speaking to the media.
As part of our reporting, we requested several Deloitte contracts unrelated to the vaccine system from the US Food and Drug Administration. That agency also redacted similar information.
“It’s basically a rubber stamp”
All the redactions cite a rule in the Freedom of Information Act commonly referred to as Exemption 4, which allows companies to hide “commercial information” such as trade secrets from the public.
The contractor, rather than the government, decides what is considered sensitive information. When a government agency receives a request for records, it sends that request to the contractors, who mark what they want to keep secret.
Companies have essentially free rein to call contract details “confidential business information,” thanks to a 2019 decision by the Supreme Court. Before that, companies had to explain why releasing the information would cause “substantial harm” to their business.
“Now all the agency has to do is get an affidavit from someone at the company that says, ‘We treat this as confidential business information.’ Period. Full stop,” says Victoria Baranetsky, the general counsel at the Center for Investigative Reporting. “It’s basically a rubber stamp.”
The court’s decision in Food Marketing Institute v. Argus Leader, written by Justice Neil Gorsuch, argued that companies like Amazon should be allowed to hide how much money they receive in federal food stamps, without having to explain why.
The decision has led to increasing secrecy about the business of government, according to Baranetsky.
“The number of contractors in our country is ballooning,” she says. “The substance of material they are responsible for is more core to our basic civil rights and civil liberties than ever before.”
In fact, when requesters protest Exemption 4 redactions in court, government lawyers will even defend the contractors, using the company’s arguments at taxpayers’ expense.
“We have contractors holding children at the border. They work for the military. They’re building the border wall, setting up prisons and schools,” says Baranetsky. “It’s just this shell game of information about how our system is operating.”
In 2023, NASA will launch VIPER (Volatiles Investigating Polar Exploration Rover), a rover that will trek across the surface of the moon and hunt for water ice that could one day be used to make rocket fuel. The rover will be armed with the best instruments and tools that NASA can come up with: wheels that can spin properly on lunar soil, a drill that’s able to dig into extraterrestrial geology, hardware that can survive 14 days of a lunar night when temperatures sink to −173 °C.
But while much of VIPER is one of a kind, custom-made for the mission, much of the software that it’s running is open-source, meaning it’s available for use, modification, and distribution by anyone for any purpose. If it’s successful, the mission may be about more than just laying the groundwork for a future lunar colony—it may also be an inflection point that causes the space industry to think differently about how it develops and operates robots.
Open-source tech rarely comes to mind when we talk about space missions. It takes a tremendous amount of money to build something that can be launched into space, make its way to its proper destination, and then fulfill a specific set of tasks hundreds or thousands (or hundreds of thousands) of miles away. Keeping the know-how to pull those things off close to one’s chest is a natural inclination. Open-source software, meanwhile, is more usually associated with scrappy programming for smaller projects, like hackathons or student demos. The code that fills online repositories like GitHub is often an inexpensive solution for groups running low on cash and resources needed to build code from scratch.
But the space industry is surging, in no small part because there’s a demand for increased access to space. And that means the use of technologies that are less expensive and more accessible, including software.
Even for bigger groups like NASA, where money’s not an issue, the open-source approach may end up leading to stronger software. “Flight software right now, I would say, is pretty mediocre in space,” says Dylan Taylor, the chairman and CEO of Voyager Space Holdings. (Case in point: Boeing’s Starliner test flight failure in 2019, which was due to software glitches.) If it’s open-source, the smartest scientists can still leverage a larger community’s expertise and feedback if it runs into problems, just as amateur developers do.
Basically, if it’s good enough for NASA, it should presumably be good enough for anyone else trying to operate a robot off this planet. With an ever-increasing number of new companies and new national agencies around the world seeking to launch their own satellites and probes into space while keeping costs down, cheaper robotics software that can confidently handle something as risky as a space mission is a huge boon.
Open-source software can also help make getting to space cheaper because it leads to standards everyone can adopt and work with. You can eliminate the high costs associated with specialized coding. Open-source frameworks are usually something new engineers have already worked with, too. “If we can just leverage that and increase this pipeline from what they’ve learned in school to what they use in flight missions, that shortens the learning curve,” says Terry Fong, director of the Intelligent Robotics Group at NASA Ames Research Center in Mountain View, California, and deputy lead for the VIPER mission. “It makes things faster for us to take advances from the research world and put it into flight.”
NASA has been using open-source software in many R&D projects for about 10 to 15 years now—the agency keeps a very extensive catalogue of the open-source code it has used. But this technology’s role in actual robots sent to space is still nascent. One system the agency has trialed is the Robot Operating System, a collection of open-source software frameworks maintained and updated by the nonprofit Open Robotics, also headquartered in Mountain View. ROS is already used in Robonaut 2, the humanoid robot that has helped with research on the International Space Station, as well as the autonomous Astrobee robots buzzing around the ISS to help astronauts run day-to-day tasks.
The Astrobee robot on the International Space Station runs on ROS.
NASA
ROS will be running and facilitating tasks critical to something called “ground flight control.” VIPER is going to be driven around by NASA personnel who will be operating things from Earth. Ground flight control will take data collected by VIPER to build real-time maps and renderings of the environment on the moon that the rover’s drivers can use to navigate safely. Other parts of the rover’s software have open-source roots as well: basic functions like telemetry and memory management are handled onboard by a program called core Flight System (cFS), developed by NASA itself and available for free on GitHub. VIPER’s mission operations outside of the rover itself are handled by Open MCT, also created by NASA.
Compared with Mars, the lunar environment is very difficult to physically emulate on Earth, which means testing out a rover’s hardware and software components isn’t easy. For this mission, says Fong, it made more sense to lean on digital simulations that could test many of the rover’s components—and that included the open-source software.
Another reason the mission lends itself to use of open-source software is that the moon is close enough for near-real-time control of the rover, which means some of the software doesn’t need to be on the rover itself and can run on Earth instead.
“We decided to have the robot’s brains split between the moon and Earth,” says Fong. “And as soon as we did that, it opened up the possibility that we can use software that’s not limited by radiation, hard flight, computing—but instead, we can just use off-the-shelf commodity commercial desktops. So we can make use of things like ROS on the ground, something used by so many people so regularly. We don’t have to just rely on custom software.”
VIPER isn’t running on 100% open-source software—its onboard flight system, for instance, uses extremely reliable proprietary software. But it’s easy to see future missions adopting and expanding on what VIPER will run. “I suspect that maybe the next rover from NASA will run Linux,” says Fong.
It will never be possible to use open-source software in all cases. Security concerns could be an issue, and might cause some parties to stick to proprietary tech entirely (although one plus to open-source platforms is that developers are often very public about finding flaws and proposing patches). And Fong also emphasizes that some missions will always be too specialized or advanced to rely heavily on open-source technology.
Still, it’s not just NASA that is turning to the open-source community. Blue Origin recently announced a partnership with several NASA groups to “code robotic intelligence and autonomy” built from open-source frameworks (the company declined to provide details). Smaller initiatives like the Libre Space Foundation based in Greece, which provides open-source hardware and software for small satellite activities, are bound to gain more attention as spaceflight continues to get cheaper. “There’s a domino effect there,” says Brian Gerkey, the CEO of Open Robotics. “Once you have a large organization like NASA saying publicly, ‘We’re depending on this software,’ then other organizations are willing to take a chance and dig in and do the work that’s necessary to make it work for them.”
Dashboards, a graphical visualization of data, seem to be everywhere, especially in these pandemic times where daily and weekly trends take on a very personal significance. In this article we are going to look at a beautiful dashboard which you can very easily customize to suit your own needs.
Visualization is the art of making the useful, beautiful.
There is only so far you can get with tables of facts and figures. Sooner or later there comes a time when you will need to create a graphical visualization of your data. Pictures really do save a thousand words, but they also help your users get a grasp of the information in a more easily consumed way.
Most Delphi programmers are probably familiar with the basic TChart component which has been bundled with nearly every version of Delphi. For more recent versions of RAD Studio such as Sidney, you had to tick an optional checkbox to get the TChart component to appear on your component palette. That bundled version of the TChart component is provided by Steema Software based in Spain.
The dashboard we are writing about here is also produced by Steema to demonstrate the power of their Pro versions of TeeChart as well as their TeeGrid. More on that in a moment.
I used the cross-platform GitHub Desktop app to download the sample directly from the repository.
What components do you need?
To work with Steema’s dashboard visualization example you’re going to need a copy of their Pro TeeChart component. This is NOT the same as the bundled version. In fact, if you have the bundled version installed you will need to completely uninstall it first because it will clash with the Pro version. You will also need to install Steema’s TeeGrid component too.
What if I don’t have the TeeChart Pro or TeeGrid component packs?
Not a problem – Steema’s website has a download for a fully functional 30-day trial of both component libraries. I used the trial versions to write this article and they worked without any problems.
Just make sure you uninstall the bundled version of TeeChart that came with RAD Studio FIRST because I didn’t remember and got myself into a bit of a tangle. If that happens to you, uninstall both the new component packs using their uninstaller. Then go into the IDE, select “component” from the menu, then “install packages”. Now scroll down and make sure all references to the TeeChart and TeeGrid components are gone. Click on any that are there, and then “remove”.
Now close the IDE and install the TeeChart Pro and TeeGrid components using their installers, and all will be well. I make these mistakes, so you don’t have to!
What does the dashboard visualization do?
Well, the source code reads from an included SQLite database. Almost all the data retrieval is done using LiveBindings. All the data access components use the FireDAC query components. There are a few areas in the program where the data is read and manipulated in code but overall, it’s nearly all the LiveBindings which do the heavy lifting.
The dashboard visualization is then either displayed in a regular Delphi FMX form or it can be extracted out to a bunch of HTML-based web pages. As demos go it’s masterful, and apart from showing the power of the TeeChart and TeeGrid components it also demonstrates quite how far RAD Studio and Delphi go towards making your life as a software developer a lot easier. I often say at webinars that “Delphi is my superpower”. It’s this kind of gorgeous yet useful visualization which can be a lot harder to achieve in other programming languages.
The maps and charts are interactive with most of the work being handled by the components themselves.
Is there anything else?
Did I mention this demo is also completely cross-platform? Thanks to the visualization being written using FireMonkey, the demo works on Windows (of course), macOS, iOS AND Android. I didn’t try FMXLinux (which now comes included with some versions of RAD Studio – see GetIt for details) but it’s entirely possible that would work too – if you try and get the dashboard visualization to work, why not drop me a line in the comments below?
Anyway, do try out this superb demo. Steema’s components are excellent in their own right but combined with this dashboard they can add that extra pizazz to your apps, no matter what platform, desktop or mobile.
Currently, the internet is hugely popular around the globe, and an individual can hardly think about a life without the internet. The solution to each problem is found on the internet today. This is another reason why progressive people of the current generation rely on virtual services to fulfill all their necessities. In addition to it, people can find almost every tiny thing on the internet, ranging from services to products and medicine to education.
Although online shopping has created wider room for you to fulfill your needs, it has arguably also made us lazier. Either way, it has led to the introduction of more and more websites. Some of these websites are dynamic, and some are static. Static websites are displayed exactly as they are stored and can be modified only by developers.
On the other hand, dynamic websites are easy to handle. They can be modified and altered by users themselves. The best part is that the users can change it without knowing website design and development.
If you reside in Sydney and are wondering about the benefits of custom web design in Sydney, here are the top 8 benefits of building a custom website for your business.
Adaptability
An ideal custom site will be endorsed with all the features needed for marketing a business. The web designer will assist you by compiling a list of priorities that they will further incorporate into the structure of your website. If you have a low budget, you can discuss such features that you can add to your site later with the designer.
Intriguing design
A custom web design displays unique and intriguing features. To be more precise, no other business or individual would hold a similar or exact design as your website. This will further help your brand to stand out from the rest of the crowd.
Custom-fit
A custom web design is cost-effective for what it delivers. It combines the aesthetic appeal of templates with functionality tailored to meet your unique business and customer requirements. Due consideration is also given to the user’s experience, navigation, overall personality, visual graphics, color scheme, and layout of the website.
Branding
Building a custom website helps with your business branding as well. With custom graphics, you can effortlessly stand out from the crowd, and visitors will remember you because of your custom site. Visitors are not just reading your website’s content; a distinctive design keeps them on your site a little longer, which helps you drive better conversions.
SEO optimized
Opting for a custom website design enables your website to be built following certain SEO techniques. This further creates room for bringing higher rankings to your site on search engines.
Grow the reputation
An online business ultimately depends on attention. You will know that your online business is getting the proper attention if your website attracts quite a lot of visitors. You will attract visitors only when you have an amazing website. Building a customized website helps you establish a rapport for your business, thereby building a unique brand for your company, which enhances its overall look.
Ownership & Control
By building a custom website design, you gain ownership of your web design and code. In addition to it, you gain absolute control over your site.
Scalability
Custom web design enables incorporating an informational architecture that is highly effective for the growth of your business. You can execute further integration and personalization with several other platforms like eCommerce and social networking tools. Although the pricing of custom website designs is higher at the initial stage, these sites provide long-term growth and better ROI.
Conclusion
Apart from the benefits listed above, there are many other design factors to pay attention to, including creating unified graphics, website management, and content provision. With a custom-made website design, you not only get an ideal website that suits your business requirements today, but also the appropriate technology to keep assisting your business in the long run.
The Microsoft campus in Redmond, Wash.(GeekWire Photo / Todd Bishop)
UPDATE: Microsoft on Monday confirmed it will buy Nuance for $19.7 billion. Nuance CEO Mark Benjamin will retain his position and report to Azure chief Scott Guthrie. The deal is expected to close later this year.
“Nuance provides the AI layer at the healthcare point of delivery and is a pioneer in the real-world application of enterprise AI,” Microsoft CEO Satya Nadella said in a statement. “AI is technology’s most important priority, and healthcare is its most urgent application. Together, with our partner ecosystem, we will put advanced AI solutions into the hands of professionals everywhere to drive better decision-making and create more meaningful connections, as we accelerate growth of Microsoft Cloud for Healthcare and Nuance.”
———
Original story: Microsoft is in “advanced talks” to acquire Nuance Communications for $16 billion, according to a report from Bloomberg on Sunday.
Nuance background: The Boston-area publicly traded company specializes in “conversational AI” for applications in healthcare, telecommunications, automotive, financial services, and more. Nuance reported revenue of $345.8 million in the quarter ended Dec. 31, down 4%, and non-GAAP net income of $91.4 million, up slightly year-over-year. The company’s stock has nearly tripled since March 2020 and its market capitalization is $13 billion.
Previous connections: Microsoft has teamed up with Nuance in the past on healthcare-related deals. Nuance also has a large presence in the Seattle region near Microsoft’s HQ as a result of several acquisitions including VoiceBox, Swype, Tweedle, Varolii, and Jott. Nuance in February acquired Saykara, a Seattle health-tech startup that makes a voice assistant for clinicians.
A big deal: At its reported price, the acquisition would be Microsoft’s second-largest to date, behind its $26.2 billion purchase of LinkedIn. It reflects the company’s continued investment in artificial intelligence, speech technology, and healthcare. Microsoft is on the “M&A warpath over the next 12-to-18 months,” according to Dan Ives, an analyst with Wedbush, citing recent reports of Microsoft’s interest in buying Discord and its $7.5 billion acquisition of ZeniMax.
We’ve reached out to Microsoft for comment and will update if we hear back.
Themba Sivate has been a Delphi programmer since 2012. He introduced his application (ST Audio Player Lite) at the Delphi 26th Showcase Challenge and we got to converse with him to have an insight on his Delphi expertise. Find out more about his software at ST Software
When did you start using RAD Studio/Delphi and how long have you been using it?
I started using RAD Studio in 2012 at university. A year later, I started learning to build applications from scratch using C++Builder installed on the institution’s machines, and I have been using it ever since.
What was it like building software before you had RAD Studio/Delphi?
I tried both Qt Creator and Visual Studio before, and both were painful and limiting. RAD Studio lets you call Delphi code from C++, meaning you can reuse Delphi libraries in C++.
How did RAD Studio/Delphi help you create your showcase application?
RAD Studio is easy to use, with simplified drag and drop and simplified packaging. It helped me complete my audio player in less time.
What made RAD Studio/Delphi stand out from other options?
Easy to use, backwards compatibility, tons of libraries/components, simplified drag and drop, cross-platform outputs.
What made you happiest about working with RAD Studio/Delphi?
Less development time. Good database handling and development. It’s sad that I can’t afford the license at this stage, but I’d be happy to build the list of applications I have in mind, which requires a paid version of RAD Studio.
What have you been able to achieve through using RAD Studio/Delphi to create your showcase application?
Debugging was pretty easy and straightforward. Most things can be achieved by modifying property values, without writing a line of code.
What are some future plans for your showcase application?
I’m planning to upload it to the Windows Store. Future releases and bug fixes are planned, along with support for multiple languages.
Thank you, Themba! You can check out his software’s showcase entry below.
Gradient descent is an optimization algorithm that follows the negative gradient of an objective function in order to locate the minimum of the function.
A limitation of gradient descent is that it uses the same step size (learning rate) for each input variable. AdaGrad and RMSProp are extensions to gradient descent that add a self-adaptive learning rate for each parameter for the objective function.
Adadelta can be considered a further extension of gradient descent that builds upon AdaGrad and RMSProp and changes the calculation of the custom step size so that the units are consistent and in turn no longer requires an initial learning rate hyperparameter.
In this tutorial, you will discover how to develop the gradient descent with Adadelta optimization algorithm from scratch.
After completing this tutorial, you will know:
Gradient descent is an optimization algorithm that uses the gradient of the objective function to navigate the search space.
Gradient descent can be updated to use an automatically adaptive step size for each input variable using a decaying average of partial derivatives, called Adadelta.
How to implement the Adadelta optimization algorithm from scratch and apply it to an objective function and evaluate the results.
Let’s get started.
Gradient Descent With Adadelta from Scratch. Photo by Robert Minkler, some rights reserved.
Tutorial Overview
This tutorial is divided into three parts; they are:

Gradient Descent
Adadelta Algorithm
Gradient Descent With Adadelta

Gradient Descent

Gradient descent is an optimization algorithm. It is technically referred to as a first-order optimization algorithm as it explicitly makes use of the first-order derivative of the target objective function.
First-order methods rely on gradient information to help direct the search for a minimum …
The first order derivative, or simply the “derivative,” is the rate of change or slope of the target function at a specific point, e.g. for a specific input.
If the target function takes multiple input variables, it is referred to as a multivariate function and the input variables can be thought of as a vector. In turn, the derivative of a multivariate target function may also be taken as a vector and is referred to generally as the gradient.
Gradient: First-order derivative for a multivariate objective function.
The derivative or the gradient points in the direction of the steepest ascent of the target function for a specific input.
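As a concrete illustration (using the same bowl-shaped function the tutorial adopts later), the gradient of a two-variable function is simply the vector of its partial derivatives:

```python
# gradient of the multivariate function f(x, y) = x^2 + y^2
# the partial derivatives are df/dx = 2x and df/dy = 2y
def gradient(x, y):
    return [2.0 * x, 2.0 * y]

# at (1.0, -2.0) the gradient points in the direction of steepest ascent
print(gradient(1.0, -2.0))  # → [2.0, -4.0]
```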
Gradient descent refers to a minimization optimization algorithm that follows the negative of the gradient downhill of the target function to locate the minimum of the function.
The gradient descent algorithm requires a target function that is being optimized and the derivative function for the objective function. The target function f() returns a score for a given set of inputs, and the derivative function f'() gives the derivative of the target function for a given set of inputs.
The gradient descent algorithm requires a starting point (x) in the problem, such as a randomly selected point in the input space.
The derivative is then calculated and a step is taken in the input space that is expected to result in a downhill movement in the target function, assuming we are minimizing the target function.
A downhill movement is made by first calculating how far to move in the input space, calculated as the steps size (called alpha or the learning rate) multiplied by the gradient. This is then subtracted from the current point, ensuring we move against the gradient, or down the target function.
x = x - step_size * f'(x)
The steeper the objective function at a given point, the larger the magnitude of the gradient, and in turn, the larger the step taken in the search space. The size of the step taken is scaled using a step size hyperparameter.
Step Size (alpha): Hyperparameter that controls how far to move in the search space against the gradient each iteration of the algorithm.
If the step size is too small, the movement in the search space will be small and the search will take a long time. If the step size is too large, the search may bounce around the search space and skip over the optima.
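The update rule and step-size behavior described above can be sketched in a few lines of Python; the quadratic test function, starting point, and hyperparameter values here are illustrative choices, not part of the tutorial's later example.

```python
# minimal gradient descent sketch for f(x) = x^2, whose derivative is f'(x) = 2x
def gradient_descent(derivative, x, step_size, n_iter):
    for _ in range(n_iter):
        # step against the gradient: x = x - step_size * f'(x)
        x = x - step_size * derivative(x)
    return x

# with a moderate step size the search converges toward the minimum at x = 0
x_min = gradient_descent(lambda x: 2.0 * x, x=1.0, step_size=0.1, n_iter=100)
print(x_min)
```

With step_size=0.1 each iteration multiplies x by 0.8, so the estimate shrinks geometrically toward zero; a much larger step size would overshoot and bounce, as the paragraph above warns.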
Now that we are familiar with the gradient descent optimization algorithm, let’s take a look at Adadelta.
Adadelta Algorithm
Adadelta (or “ADADELTA”) is an extension to the gradient descent optimization algorithm.
Adadelta is designed to accelerate the optimization process, e.g. decrease the number of function evaluations required to reach the optima, or to improve the capability of the optimization algorithm, e.g. result in a better final result.
It is best understood as an extension of the AdaGrad and RMSProp algorithms.
AdaGrad is an extension of gradient descent that calculates a step size (learning rate) for each parameter for the objective function each time an update is made. The step size is calculated by first summing the partial derivatives for the parameter seen so far during the search, then dividing the initial step size hyperparameter by the square root of the sum of the squared partial derivatives.
The calculation of the custom step size for one parameter with AdaGrad is as follows:

cust_step_size(t+1) = step_size / (1e-8 + sqrt(s(t)))
Where cust_step_size(t+1) is the calculated step size for an input variable for a given point during the search, step_size is the initial step size, sqrt() is the square root operation, and s(t) is the sum of the squared partial derivatives for the input variable seen during the search so far (including the current iteration).
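A sketch of this calculation in Python (the helper name, the sample gradients, and the small 1e-8 guard constant are illustrative):

```python
from math import sqrt

# AdaGrad-style step size for one parameter (illustrative sketch)
def adagrad_step_size(step_size, partial_derivs):
    # s: running sum of the squared partial derivatives seen so far
    s = sum(g ** 2.0 for g in partial_derivs)
    # a tiny constant guards against division by zero on the first update
    return step_size / (1e-8 + sqrt(s))

# the step size shrinks as squared gradients accumulate during the search
early = adagrad_step_size(0.1, [2.0])
late = adagrad_step_size(0.1, [2.0, 2.0, 2.0])
print(early, late)
```

Because the sum of squared partial derivatives only ever grows, the step size decays monotonically, which is exactly the drawback Adadelta later addresses.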
RMSProp can be thought of as an extension of AdaGrad in that it uses a decaying average or moving average of the partial derivatives instead of the sum in the calculation of the step size for each parameter. This is achieved by adding a new hyperparameter “rho” that acts like a momentum for the partial derivatives.
The calculation of the decaying moving average squared partial derivative for one parameter is as follows:
s(t+1) = (s(t) * rho) + (f'(x(t))^2 * (1.0-rho))
Where s(t+1) is the mean squared partial derivative for one parameter for the current iteration of the algorithm, s(t) is the decaying moving average squared partial derivative for the previous iteration, f'(x(t))^2 is the squared partial derivative for the current parameter, and rho is a hyperparameter, typically with the value of 0.9 like momentum.
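Expressed in Python, the decaying moving average update for a single parameter might look like this (rho and the sample gradients are illustrative values):

```python
# decaying moving average of the squared partial derivative (RMSProp-style)
def update_sq_grad_avg(s, grad, rho=0.9):
    return (s * rho) + (grad ** 2.0 * (1.0 - rho))

# with a constant gradient of 1.0, the average approaches 1.0 over time
s = 0.0
for g in [1.0, 1.0, 1.0]:
    s = update_sq_grad_avg(s, g)
print(s)
```

Unlike AdaGrad's ever-growing sum, this average forgets old gradients at a rate set by rho, so the step size can recover if gradients shrink later in the search.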
Adadelta is a further extension of RMSProp designed to improve the convergence of the algorithm and to remove the need for a manually specified initial learning rate.
The idea presented in this paper was derived from ADAGRAD in order to improve upon the two main drawbacks of the method: 1) the continual decay of learning rates throughout training, and 2) the need for a manually selected global learning rate.
The decaying moving average of the squared partial derivative is calculated for each parameter, as with RMSProp. The key difference is in the calculation of the step size for a parameter that uses the decaying average of the delta or change in parameter.
This choice of numerator was to ensure that both parts of the calculation have the same units.
After independently deriving the RMSProp update, the authors noticed that the units in the update equations for gradient descent, momentum and Adagrad do not match. To fix this, they use an exponentially decaying average of the square updates
First, the custom step size is calculated as the square root of the decaying moving average of the squared change to the parameter divided by the square root of the decaying moving average of the squared partial derivatives.

cust_step_size(t+1) = (ep + sqrt(delta(t))) / (ep + sqrt(s(t)))
Where cust_step_size(t+1) is the custom step size for a parameter for a given update, ep is a hyperparameter that is added to the numerator and denominator to avoid a divide by zero error, delta(t) is the decaying moving average of the squared change to the parameter (calculated in the last iteration), and s(t) is the decaying moving average of the squared partial derivative (calculated in the current iteration).
The ep hyperparameter is set to a small value such as 1e-3 or 1e-8. In addition to avoiding a divide by zero error, it also helps with the first step of the algorithm when the decaying moving average squared change and decaying moving average squared gradient are zero.
Next, the change to the parameter is calculated as the custom step size multiplied by the partial derivative.
change(t+1) = cust_step_size(t+1) * f'(x(t))
Next, the decaying average of the squared change to the parameter is updated.

delta(t+1) = (delta(t) * rho) + (change(t+1)^2 * (1.0-rho))
Where delta(t+1) is the decaying average of the change to the variable to be used in the next iteration, change(t+1) was calculated in the step before and rho is a hyperparameter that acts like momentum and has a value like 0.9.
Finally, the new value for the variable is calculated using the change.
x(t+1) = x(t) - change(t+1)
This process is then repeated for each variable for the objective function, then the entire process is repeated to navigate the search space for a fixed number of algorithm iterations.
Now that we are familiar with the Adadelta algorithm, let’s explore how we might implement it and evaluate its performance.
Gradient Descent With Adadelta
In this section, we will explore how to implement the gradient descent optimization algorithm with Adadelta.
Two-Dimensional Test Problem
First, let’s define an optimization function.
We will use a simple two-dimensional function that squares the input of each dimension and define the range of valid inputs from -1.0 to 1.0.
The objective() function below implements this function.
# objective function
def objective(x, y):
return x**2.0 + y**2.0
We can create a three-dimensional plot of the dataset to get a feeling for the curvature of the response surface.
The complete example of plotting the objective function is listed below.
# 3d plot of the test function
from numpy import arange
from numpy import meshgrid
from matplotlib import pyplot
# objective function
def objective(x, y):
return x**2.0 + y**2.0
# define range for input
r_min, r_max = -1.0, 1.0
# sample input range uniformly at 0.1 increments
xaxis = arange(r_min, r_max, 0.1)
yaxis = arange(r_min, r_max, 0.1)
# create a mesh from the axis
x, y = meshgrid(xaxis, yaxis)
# compute targets
results = objective(x, y)
# create a surface plot with the jet color scheme
figure = pyplot.figure()
axis = figure.add_subplot(projection='3d')
axis.plot_surface(x, y, results, cmap='jet')
# show the plot
pyplot.show()
Running the example creates a three dimensional surface plot of the objective function.
We can see the familiar bowl shape with the global minima at f(0, 0) = 0.
Three-Dimensional Plot of the Test Objective Function
We can also create a two-dimensional plot of the function. This will be helpful later when we want to plot the progress of the search.
The example below creates a contour plot of the objective function.
# contour plot of the test function
from numpy import asarray
from numpy import arange
from numpy import meshgrid
from matplotlib import pyplot
# objective function
def objective(x, y):
return x**2.0 + y**2.0
# define range for input
bounds = asarray([[-1.0, 1.0], [-1.0, 1.0]])
# sample input range uniformly at 0.1 increments
xaxis = arange(bounds[0,0], bounds[0,1], 0.1)
yaxis = arange(bounds[1,0], bounds[1,1], 0.1)
# create a mesh from the axis
x, y = meshgrid(xaxis, yaxis)
# compute targets
results = objective(x, y)
# create a filled contour plot with 50 levels and jet color scheme
pyplot.contourf(x, y, results, levels=50, cmap='jet')
# show the plot
pyplot.show()
Running the example creates a two-dimensional contour plot of the objective function.
We can see the bowl shape compressed to contours shown with a color gradient. We will use this plot to plot the specific points explored during the progress of the search.
Two-Dimensional Contour Plot of the Test Objective Function
Now that we have a test objective function, let’s look at how we might implement the Adadelta optimization algorithm.
Gradient Descent Optimization With Adadelta
We can apply the gradient descent with Adadelta to the test problem.
First, we need a function that calculates the derivative for this function.
f(x) = x^2
f'(x) = x * 2
The derivative of x^2 is x * 2 in each dimension. The derivative() function implements this below.
# derivative of objective function
def derivative(x, y):
return asarray([x * 2.0, y * 2.0])
Next, we can implement gradient descent optimization.
First, we can select a random point in the bounds of the problem as a starting point for the search.
This assumes we have an array that defines the bounds of the search with one row for each dimension and the first column defines the minimum and the second column defines the maximum of the dimension.
...
# generate an initial point
solution = bounds[:, 0] + rand(len(bounds)) * (bounds[:, 1] - bounds[:, 0])
Next, we need to initialize the decaying average of the squared partial derivatives and squared change for each dimension to 0.0 values.
...
# list of the average square gradients for each variable
sq_grad_avg = [0.0 for _ in range(bounds.shape[0])]
# list of the average parameter updates
sq_para_avg = [0.0 for _ in range(bounds.shape[0])]
We can then enumerate a fixed number of iterations of the search optimization algorithm defined by a “n_iter” hyperparameter.
...
# run the gradient descent
for it in range(n_iter):
...
The first step is to calculate the gradient for the current solution using the derivative() function.
We then need to calculate the square of the partial derivative and update the decaying moving average of the squared partial derivatives with the “rho” hyperparameter.
...
# update the average of the squared partial derivatives
for i in range(gradient.shape[0]):
# calculate the squared gradient
sg = gradient[i]**2.0
# update the moving average of the squared gradient
sq_grad_avg[i] = (sq_grad_avg[i] * rho) + (sg * (1.0-rho))
We can then use the decaying moving average of the squared partial derivatives and gradient to calculate the step size for the next point. We will do this one variable at a time.
...
# build solution
new_solution = list()
for i in range(solution.shape[0]):
...
First, we will calculate the custom step size for this variable on this iteration using the decaying moving average of the squared changes and squared partial derivatives, as well as the “ep” hyperparameter.
...
# calculate the step size for this variable
alpha = (ep + sqrt(sq_para_avg[i])) / (ep + sqrt(sq_grad_avg[i]))
Next, we can use the custom step size and partial derivative to calculate the change to the variable.
...
# calculate the change
change = alpha * gradient[i]
We can then use the change to update the decaying moving average of the squared change using the “rho” hyperparameter.
...
# update the moving average of squared parameter changes
sq_para_avg[i] = (sq_para_avg[i] * rho) + (change**2.0 * (1.0-rho))
Finally, we can change the variable and store the result before moving on to the next variable.
...
# calculate the new position in this variable
value = solution[i] - change
# store this variable
new_solution.append(value)
This new solution can then be evaluated using the objective() function and the performance of the search can be reported.
We can tie all of this together into a function named adadelta() that takes the names of the objective and derivative functions, an array with the bounds of the domain, and hyperparameter values for the total number of iterations and for rho, and returns the final solution and its evaluation.
The ep hyperparameter can also be taken as an argument, although it has a sensible default value of 1e-3.
This complete function is listed below.
# gradient descent algorithm with adadelta
def adadelta(objective, derivative, bounds, n_iter, rho, ep=1e-3):
    # generate an initial point
    solution = bounds[:, 0] + rand(len(bounds)) * (bounds[:, 1] - bounds[:, 0])
    # list of the average square gradients for each variable
    sq_grad_avg = [0.0 for _ in range(bounds.shape[0])]
    # list of the average parameter updates
    sq_para_avg = [0.0 for _ in range(bounds.shape[0])]
    # run the gradient descent
    for it in range(n_iter):
        # calculate gradient
        gradient = derivative(solution[0], solution[1])
        # update the average of the squared partial derivatives
        for i in range(gradient.shape[0]):
            # calculate the squared gradient
            sg = gradient[i]**2.0
            # update the moving average of the squared gradient
            sq_grad_avg[i] = (sq_grad_avg[i] * rho) + (sg * (1.0-rho))
        # build a solution one variable at a time
        new_solution = list()
        for i in range(solution.shape[0]):
            # calculate the step size for this variable
            alpha = (ep + sqrt(sq_para_avg[i])) / (ep + sqrt(sq_grad_avg[i]))
            # calculate the change
            change = alpha * gradient[i]
            # update the moving average of squared parameter changes
            sq_para_avg[i] = (sq_para_avg[i] * rho) + (change**2.0 * (1.0-rho))
            # calculate the new position in this variable
            value = solution[i] - change
            # store this variable
            new_solution.append(value)
        # evaluate candidate point
        solution = asarray(new_solution)
        solution_eval = objective(solution[0], solution[1])
        # report progress
        print('>%d f(%s) = %.5f' % (it, solution, solution_eval))
    return [solution, solution_eval]
Note: we have intentionally used lists and an imperative coding style instead of vectorized operations for readability. Feel free to adapt the implementation to a vectorized implementation with NumPy arrays for better performance.
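As one possible vectorized adaptation (a sketch, not the tutorial's listing; the function name adadelta_vec is our own), the two per-variable loops collapse into elementwise NumPy array expressions:

```python
from numpy import asarray, sqrt
from numpy.random import rand, seed

# objective function from the tutorial
def objective(x, y):
    return x**2.0 + y**2.0

# derivative of the objective function from the tutorial
def derivative(x, y):
    return asarray([x * 2.0, y * 2.0])

# vectorized adadelta sketch: arrays replace the per-variable loops
def adadelta_vec(objective, derivative, bounds, n_iter, rho, ep=1e-3):
    solution = bounds[:, 0] + rand(len(bounds)) * (bounds[:, 1] - bounds[:, 0])
    sq_grad_avg = asarray([0.0, 0.0])
    sq_para_avg = asarray([0.0, 0.0])
    for _ in range(n_iter):
        gradient = derivative(solution[0], solution[1])
        # elementwise moving-average and step-size updates
        sq_grad_avg = sq_grad_avg * rho + gradient**2.0 * (1.0 - rho)
        alpha = (ep + sqrt(sq_para_avg)) / (ep + sqrt(sq_grad_avg))
        change = alpha * gradient
        sq_para_avg = sq_para_avg * rho + change**2.0 * (1.0 - rho)
        solution = solution - change
    return solution, objective(solution[0], solution[1])

seed(1)
bounds = asarray([[-1.0, 1.0], [-1.0, 1.0]])
best, score = adadelta_vec(objective, derivative, bounds, 120, 0.99)
print(best, score)
```

The arithmetic is identical to the loop version; only the bookkeeping changes, which matters more as the number of input variables grows.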
We can then define our hyperparameters and call the adadelta() function to optimize our test objective function.
In this case, we will use 120 iterations of the algorithm and a value of 0.99 for the rho hyperparameter, chosen after a little trial and error.
...
# seed the pseudo random number generator
seed(1)
# define range for input
bounds = asarray([[-1.0, 1.0], [-1.0, 1.0]])
# define the total iterations
n_iter = 120
# momentum for adadelta
rho = 0.99
# perform the gradient descent search with adadelta
best, score = adadelta(objective, derivative, bounds, n_iter, rho)
print('Done!')
print('f(%s) = %f' % (best, score))
Tying all of this together, the complete example of gradient descent optimization with Adadelta is listed below.
# gradient descent optimization with adadelta for a two-dimensional test function
from math import sqrt
from numpy import asarray
from numpy.random import rand
from numpy.random import seed

# objective function
def objective(x, y):
    return x**2.0 + y**2.0

# derivative of objective function
def derivative(x, y):
    return asarray([x * 2.0, y * 2.0])

# gradient descent algorithm with adadelta
def adadelta(objective, derivative, bounds, n_iter, rho, ep=1e-3):
    # generate an initial point
    solution = bounds[:, 0] + rand(len(bounds)) * (bounds[:, 1] - bounds[:, 0])
    # list of the average square gradients for each variable
    sq_grad_avg = [0.0 for _ in range(bounds.shape[0])]
    # list of the average parameter updates
    sq_para_avg = [0.0 for _ in range(bounds.shape[0])]
    # run the gradient descent
    for it in range(n_iter):
        # calculate gradient
        gradient = derivative(solution[0], solution[1])
        # update the average of the squared partial derivatives
        for i in range(gradient.shape[0]):
            # calculate the squared gradient
            sg = gradient[i]**2.0
            # update the moving average of the squared gradient
            sq_grad_avg[i] = (sq_grad_avg[i] * rho) + (sg * (1.0-rho))
        # build a solution one variable at a time
        new_solution = list()
        for i in range(solution.shape[0]):
            # calculate the step size for this variable
            alpha = (ep + sqrt(sq_para_avg[i])) / (ep + sqrt(sq_grad_avg[i]))
            # calculate the change
            change = alpha * gradient[i]
            # update the moving average of squared parameter changes
            sq_para_avg[i] = (sq_para_avg[i] * rho) + (change**2.0 * (1.0-rho))
            # calculate the new position in this variable
            value = solution[i] - change
            # store this variable
            new_solution.append(value)
        # evaluate candidate point
        solution = asarray(new_solution)
        solution_eval = objective(solution[0], solution[1])
        # report progress
        print('>%d f(%s) = %.5f' % (it, solution, solution_eval))
    return [solution, solution_eval]

# seed the pseudo random number generator
seed(1)
# define range for input
bounds = asarray([[-1.0, 1.0], [-1.0, 1.0]])
# define the total iterations
n_iter = 120
# rho for adadelta
rho = 0.99
# perform the gradient descent search with adadelta
best, score = adadelta(objective, derivative, bounds, n_iter, rho)
print('Done!')
print('f(%s) = %f' % (best, score))
Running the example applies the Adadelta optimization algorithm to our test problem and reports performance of the search for each iteration of the algorithm.
Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.
In this case, we can see that a near optimal solution was found after perhaps 105 iterations of the search, with input values near 0.0 and 0.0, evaluating to 0.0.
We can plot the progress of the Adadelta search on a contour plot of the domain.
This can provide an intuition for the progress of the search over the iterations of the algorithm.
We must update the adadelta() function to maintain a list of all solutions found during the search, then return this list at the end of the search.
The updated version of the function with these changes is listed below.
# gradient descent algorithm with adadelta
def adadelta(objective, derivative, bounds, n_iter, rho, ep=1e-3):
    # track all solutions
    solutions = list()
    # generate an initial point
    solution = bounds[:, 0] + rand(len(bounds)) * (bounds[:, 1] - bounds[:, 0])
    # list of the average square gradients for each variable
    sq_grad_avg = [0.0 for _ in range(bounds.shape[0])]
    # list of the average parameter updates
    sq_para_avg = [0.0 for _ in range(bounds.shape[0])]
    # run the gradient descent
    for it in range(n_iter):
        # calculate gradient
        gradient = derivative(solution[0], solution[1])
        # update the average of the squared partial derivatives
        for i in range(gradient.shape[0]):
            # calculate the squared gradient
            sg = gradient[i]**2.0
            # update the moving average of the squared gradient
            sq_grad_avg[i] = (sq_grad_avg[i] * rho) + (sg * (1.0-rho))
        # build solution
        new_solution = list()
        for i in range(solution.shape[0]):
            # calculate the step size for this variable
            alpha = (ep + sqrt(sq_para_avg[i])) / (ep + sqrt(sq_grad_avg[i]))
            # calculate the change
            change = alpha * gradient[i]
            # update the moving average of squared parameter changes
            sq_para_avg[i] = (sq_para_avg[i] * rho) + (change**2.0 * (1.0-rho))
            # calculate the new position in this variable
            value = solution[i] - change
            # store this variable
            new_solution.append(value)
        # store the new solution
        solution = asarray(new_solution)
        solutions.append(solution)
        # evaluate candidate point
        solution_eval = objective(solution[0], solution[1])
        # report progress
        print('>%d f(%s) = %.5f' % (it, solution, solution_eval))
    return solutions
We can then execute the search as before, and this time retrieve the list of solutions instead of the best final solution.
...
# seed the pseudo random number generator
seed(1)
# define range for input
bounds = asarray([[-1.0, 1.0], [-1.0, 1.0]])
# define the total iterations
n_iter = 120
# rho for adadelta
rho = 0.99
# perform the gradient descent search with adadelta
solutions = adadelta(objective, derivative, bounds, n_iter, rho)
We can then create a contour plot of the objective function, as before.
...
# sample input range uniformly at 0.1 increments
xaxis = arange(bounds[0,0], bounds[0,1], 0.1)
yaxis = arange(bounds[1,0], bounds[1,1], 0.1)
# create a mesh from the axis
x, y = meshgrid(xaxis, yaxis)
# compute targets
results = objective(x, y)
# create a filled contour plot with 50 levels and jet color scheme
pyplot.contourf(x, y, results, levels=50, cmap='jet')
Finally, we can plot each solution found during the search as a white dot connected by a line.
...
# plot the solutions as white dots connected by a line
solutions = asarray(solutions)
pyplot.plot(solutions[:, 0], solutions[:, 1], '.-', color='w')
Tying this all together, the complete example of performing the Adadelta optimization on the test problem and plotting the results on a contour plot is listed below.
# example of plotting the adadelta search on a contour plot of the test function
from math import sqrt
from numpy import asarray
from numpy import arange
from numpy.random import rand
from numpy.random import seed
from numpy import meshgrid
from matplotlib import pyplot
from mpl_toolkits.mplot3d import Axes3D

# objective function
def objective(x, y):
    return x**2.0 + y**2.0

# derivative of objective function
def derivative(x, y):
    return asarray([x * 2.0, y * 2.0])

# gradient descent algorithm with adadelta
def adadelta(objective, derivative, bounds, n_iter, rho, ep=1e-3):
    # track all solutions
    solutions = list()
    # generate an initial point
    solution = bounds[:, 0] + rand(len(bounds)) * (bounds[:, 1] - bounds[:, 0])
    # list of the average square gradients for each variable
    sq_grad_avg = [0.0 for _ in range(bounds.shape[0])]
    # list of the average parameter updates
    sq_para_avg = [0.0 for _ in range(bounds.shape[0])]
    # run the gradient descent
    for it in range(n_iter):
        # calculate gradient
        gradient = derivative(solution[0], solution[1])
        # update the average of the squared partial derivatives
        for i in range(gradient.shape[0]):
            # calculate the squared gradient
            sg = gradient[i]**2.0
            # update the moving average of the squared gradient
            sq_grad_avg[i] = (sq_grad_avg[i] * rho) + (sg * (1.0-rho))
        # build solution
        new_solution = list()
        for i in range(solution.shape[0]):
            # calculate the step size for this variable
            alpha = (ep + sqrt(sq_para_avg[i])) / (ep + sqrt(sq_grad_avg[i]))
            # calculate the change
            change = alpha * gradient[i]
            # update the moving average of squared parameter changes
            sq_para_avg[i] = (sq_para_avg[i] * rho) + (change**2.0 * (1.0-rho))
            # calculate the new position in this variable
            value = solution[i] - change
            # store this variable
            new_solution.append(value)
        # store the new solution
        solution = asarray(new_solution)
        solutions.append(solution)
        # evaluate candidate point
        solution_eval = objective(solution[0], solution[1])
        # report progress
        print('>%d f(%s) = %.5f' % (it, solution, solution_eval))
    return solutions

# seed the pseudo random number generator
seed(1)
# define range for input
bounds = asarray([[-1.0, 1.0], [-1.0, 1.0]])
# define the total iterations
n_iter = 120
# rho for adadelta
rho = 0.99
# perform the gradient descent search with adadelta
solutions = adadelta(objective, derivative, bounds, n_iter, rho)
# sample input range uniformly at 0.1 increments
xaxis = arange(bounds[0,0], bounds[0,1], 0.1)
yaxis = arange(bounds[1,0], bounds[1,1], 0.1)
# create a mesh from the axis
x, y = meshgrid(xaxis, yaxis)
# compute targets
results = objective(x, y)
# create a filled contour plot with 50 levels and jet color scheme
pyplot.contourf(x, y, results, levels=50, cmap='jet')
# plot the solutions as white dots connected by a line
solutions = asarray(solutions)
pyplot.plot(solutions[:, 0], solutions[:, 1], '.-', color='w')
# show the plot
pyplot.show()
Running the example performs the search as before, except in this case, the contour plot of the objective function is created.
In this case, we can see that a white dot is shown for each solution found during the search, starting above the optima and progressively getting closer to the optima at the center of the plot.
Contour Plot of the Test Objective Function With Adadelta Search Results Shown
Summary
In this tutorial, you discovered how to develop the gradient descent with Adadelta optimization algorithm from scratch.
Specifically, you learned:
Gradient descent is an optimization algorithm that uses the gradient of the objective function to navigate the search space.
Gradient descent can be updated to use an automatically adaptive step size for each input variable using a decaying average of partial derivatives, called Adadelta.
How to implement the Adadelta optimization algorithm from scratch and apply it to an objective function and evaluate the results.
Do you have any questions?
Ask your questions in the comments below and I will do my best to answer.
In a modern computer, many application processes must execute in parallel for the system to run. Controlling all of these processes efficiently and effectively is a paramount function of the operating system.
To execute multiple programs, a multiprogramming system is used. This type of operating system keeps more than one program in progress on the CPU, so the system is more fully utilized than when a single program executes while all the others wait their turn.
When a system is running, multiple processes wait for their chance to use the CPU and start their execution. These processes are often called jobs, and they are kept in a job pool because main memory is too small to hold all of them at once. The job pool consists of every process waiting for allocation of the CPU and main memory. The operating system selects a job from this pool and transfers it to main memory to begin execution. The processor continues to execute one job until it is interrupted by some external event or the job begins an input/output task.
The main components of a multi-programming system are command processor, I/O control system, file system, and transient area.
The parts of the transient area are sub-segmented to store individual programs and further resource management routines are connected with the basic function of the operating system.
When two or more programs are present in computer memory concurrently and the processor is shared between them, it is called multiprogramming. By organizing jobs on a single shared processor, CPU utilization increases because the CPU always has a program to execute, and by selecting jobs according to their priority, response time is lowered.
Without multiprogramming, when a job leaves the CPU for an input/output task, the CPU becomes idle until that job resumes. Jobs that are ready to execute cannot be assigned to the CPU, even though the CPU is doing no useful work, because it is still allocated to the job that is busy with input/output. The idea of multiprogramming was devised to eliminate this idle time and improve the efficiency of the system as a whole.
Working of Multi-Program System
In this scheme, as soon as a job proceeds to an input/output task, the OS pauses it, selects another job from the job pool, and gives the CPU to the new job so it can start processing. The previous job continues its input/output task while the new job executes.
If the next job also proceeds to an input/output task, the CPU selects the one after it, and so on. As soon as any of the earlier jobs completes its input/output operation, it returns to the pool and is eventually assigned the CPU again, so no CPU time is squandered.
Thus, the overall goal of a multiprogramming system is to keep the CPU occupied as long as jobs are present in the job pool. In this way, multiple programs can be carried out on a single-processor machine and the CPU never remains idle.
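The scheduling idea described above can be sketched as a toy simulation (the job names and burst lists are invented for illustration; this is not real OS code): whenever the running job blocks for I/O, the CPU immediately picks the next ready job from the pool.

```python
from collections import deque

# toy jobs: each is (name, list of bursts); 'cpu' runs, 'io' blocks
jobs = deque([
    ("job1", ["cpu", "io", "cpu"]),
    ("job2", ["cpu", "cpu"]),
    ("job3", ["io", "cpu"]),
])

timeline = []  # order in which jobs actually occupy the CPU
while jobs:
    name, bursts = jobs.popleft()
    # run CPU bursts until the job blocks for I/O or finishes
    while bursts and bursts[0] == "cpu":
        timeline.append(name)
        bursts.pop(0)
    if bursts:
        # job hit an 'io' burst: release the CPU and rejoin the pool
        bursts.pop(0)
        jobs.append((name, bursts))

print(timeline)  # ['job1', 'job2', 'job2', 'job1', 'job3']
```

Note that the CPU is never idle while any job in the pool still has a CPU burst to run, which is exactly the goal stated above.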
Difference between Multiprogramming and Multiprocessing
Computer systems with two or more CPUs (processors) are called multiprocessing systems. With multiple processors available, multiple processes can be executed at the same time; the processors work by sharing memory, the clock, and peripheral devices. A computer system can be both multiprogrammed and multiprocessing at the same time. The difference is that in multiprogramming, the system keeps programs in main memory and executes them using a single CPU, while multiprocessing means executing multiple processes at the same time on multiple processors. Multiprogramming is carried out by switching from one process to another, whereas multiprocessing is carried out by means of parallel processing.
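To make the contrast concrete, here is a minimal Python sketch of true multiprocessing, where worker processes may run on separate CPUs at the same time (the `square` function is our own example; a multiprogramming system would instead interleave the same work on one CPU):

```python
from multiprocessing import Pool

# a trivial unit of work to hand out to worker processes
def square(n):
    return n * n

if __name__ == "__main__":
    # each worker process may run on a separate CPU simultaneously
    with Pool(processes=2) as pool:
        results = pool.map(square, [1, 2, 3, 4])
    print(results)  # [1, 4, 9, 16]
```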
Advantages of Multiprogramming Operating System
CPU utilization increases and Idle time reduces.
Smart utilization of Resources.
Reduction in response time.
The time required for Short time jobs is reduced.
The system can be used by multiple users at once.
Total read time is reduced while executing a job.
Helps fast monitoring of tasks as they run parallel.
Disadvantages of Multiprogramming System
There is a need for CPU scheduling.
As all types of jobs are stored in main memory, memory management is required.
Long waiting times occur when the system has a large number of pending jobs.
Jarrod Davis has been using Delphi ever since Turbo Pascal 3.03. He entered his application (GameVision Toolkit) in the Delphi 26th Showcase Challenge and was asked for his thoughts on using Delphi. More information about his application is available on GameVision.
When did you start using RAD Studio/Delphi and how long have you been using it?
I have used every version starting with Turbo Pascal 3.03 back in the day through to the most recent version of Delphi.
What was it like building software before you had RAD Studio/Delphi?
I’ve always used Object Pascal/Delphi, but in those times when I had to use a different development tool for whatever reason, I was never nearly as productive as I am using Delphi.
How did RAD Studio/Delphi help you create your showcase application?
I was able to take advantage of my knowledge using Delphi, source code, utilities and libraries I have accumulated over the years.
What made RAD Studio/Delphi stand out from other options?
Object Pascal is just a nice and expressive language for me and the Delphi IDE has all the features for rapid application development
What made you happiest about working with RAD Studio/Delphi?
Ease of use, rapid application development. Everything “just works.”
What have you been able to achieve through using RAD Studio/Delphi to create your showcase application?
Take my version 1.x and add all the features I had been planning in an impressively short period of time.
What are some future plans for your showcase application?
Continue to improve and add features. Thank you, Jarrod! The showcase entry for his software can be found below.
University of Washington professor Margaret O’Mara discusses the history of tech at the 2017 GeekWire Summit. (GeekWire Photo / Dan DeLong)
Amazon warehouse employees voted against unionization in Bessemer, Ala., in a victory for the company and a defeat for organized labor. But the process put a harsh spotlight on labor practices in the tech giant’s fulfillment centers, and the company’s anti-union tactics. The union is challenging the outcome. But even if the results hold, was this really a win for Amazon?
“There are not clear winners and losers here,” said Margaret O’Mara, a historian, author and University of Washington professor who specializes in the history of tech and politics. “There may be some victory laps being run at Amazon right now. But this has opened up a conversation about its labor practices. Amazon plays hardball. That’s part of the secret of its success.”
On this episode of GeekWire’s new Day 2 podcast, O’Mara speaks about the aftermath of the union vote with GeekWire reporter Mike Lewis and our podcast collaborator Jason Boyce, a former Amazon seller who runs the e-commerce agency Avenue7Media and co-authored the book The Amazon Jungle.
Amazon CEO Jeff Bezos speaking at the Economic Club of Washington, D.C. (Economic Club of Washington, D.C. Photo / Gary Cameron)
Jeff Bezos surprised some Amazon critics and followers this week by throwing his weight behind a federal corporate tax hike to help pay for President Biden’s infrastructure plan. It was a position that seemed out of step with his company’s history and the rest of corporate America.
It raises the question: why are Bezos and Amazon breaking with their peers and supporting an increased corporate federal income tax rate?
Amazon has become emblematic of American corporations that amass staggering profits and are criticized for how much they pay in taxes. Bezos’ statement in support of a corporate tax hike this week appears to undercut that reputation.
His position also sets him apart from the rest of the corporate elite. The Business Roundtable, a powerful CEO association of which Bezos is a member, was quick to rebuke Biden’s plan and warn any tax increase could hurt American competitiveness.
There are several reasons why Amazon is supporting the increased tax rate. In short, it’s because Amazon doesn’t operate like any other company.
Amazon needs infrastructure
The most obvious reason Bezos is backing a tax hike to pay for infrastructure is Amazon, more than any other major tech company, needs infrastructure.
Accessible roads and bridges and a functioning post office are critical to fulfilling Amazon’s promise of delivering packages to customers on time. The large swaths of rural America without a broadband connection are also untapped markets for Amazon. If more people can easily access Amazon.com, more people can buy Amazon products.
An Amazon trailer on I-5 near Orland, Calif., a few miles from the company’s planned Delivery Station in the rural community. (Photo by Chris Kaufman for GeekWire.)
Biden’s American Jobs Plan would spend more than $2 trillion to repair bridges, ports, airports, and transit systems. It pledges to bring high-speed broadband to every American and invest in research and development and technology jobs training.
Those programs could have major benefits for Amazon, but they don’t come cheap. Biden’s plan to fund the infrastructure package includes a 7-percentage-point increase in the corporate tax rate, to 28%. Under the 2017 tax overhaul, the rate was slashed from 35% to 21%. The proposal also seeks to close tax loopholes and discourage American companies from shifting profits offshore to reduce their tax obligations.
It’s not all about income tax
University of Washington professor Margaret O’Mara. (UW Photo)
Since the earliest days at Amazon, the company has kept its federal tax bill low by reinvesting profits back into the company and taking advantage of the research and development tax credit, among other strategies.
For years Amazon reported little or no profits at all. Today, Amazon does post a profit but it’s not nearly as high as many other tech companies. That means a modest increase in the corporate income tax rate might not have a major impact on Amazon.
“When you look at the taxes that Amazon pays or does not pay, as well as other tech companies, it’s not just the corporate tax rate,” said Margaret O’Mara, a University of Washington historian and author of The Code: Silicon Valley and the Remaking of America. “Particularly in the case of Amazon, what’s as important, if not more important, are the other carve-outs in the tax code, notably the R&D credit.”
Jay Carney, Amazon’s policy and communications chief, tweeted this last week:
If the R&D Tax Credit is a “loophole,” it’s certainly one Congress strongly intended. The R&D Tax Credit has existed since 1981, was extended 15 times with bipartisan support, and was made permanent in 2015 in a law signed by President Obama.
“Amazon” has become shorthand among politicians across the aisle for multinational corporations that avoid paying their fair share of taxes, including the current and former president.
When Biden announced plans to rein in corporate tax avoidance to pay for his $2 trillion infrastructure bill last week, he singled out Amazon by name.
“A fireman, a teacher paying 22%, [and] Amazon and 90 other major corporations paying zero in federal taxes,” Biden said. “I’m going to put an end to that.”
Amazon earned Biden’s ire by paying $0 in federal income taxes for two years, according to several outside reports and analyses of the company’s finances. That record changed in 2019, when Amazon paid $162 million in federal income tax on $280.5 billion in total revenue.
In February Amazon said its 2020 tax contributions included about $1.7 billion in federal income tax expense. The company posted revenue of $386 billion last year and income before income taxes of $24.1 billion, boosted by a pandemic-driven surge as customers relied on its online shopping and cloud computing services.
Political brownie points
Underscoring Amazon’s support for the tax hike is an increasingly hostile political climate. Amazon has been a convenient punching bag for politicians across the aisle for years, but the criticism has intensified since Democrats took control of Congress and the White House.
Amazon’s diminishing popularity in Washington D.C. was on full display over the past few weeks, when prominent politicians from across the aisle including Biden waded into the battle to unionize Amazon warehouse workers in Bessemer, Ala. A sizable majority of warehouse employees voted against unionization on Friday.
Bezos’ willingness to accept a higher tax obligation to pay for popular infrastructure programs could be an effort to curry favor with the Biden administration and customers at a tenuous time for the company.
“We support the Biden Administration’s focus on making bold investments in American infrastructure,” Bezos wrote in his statement this week. “Both Democrats and Republicans have supported infrastructure in the past, and it’s the right time to work together to make this happen.”
The Centers for Disease Control and Prevention’s V-Safe tracks health status after a COVID-19 vaccination. (GeekWire Photo)
On April 15, anyone in Washington state who is 16 or older can roll up their sleeve and get a shot of COVID-19 vaccine. And once two weeks have passed after either one dose of the Johnson & Johnson vaccine, or a second dose of the Moderna or Pfizer vaccine, that person is now considered fully vaccinated.
And then what?
Vaccinated people can savor the fact that they are now almost certainly protected against getting seriously sick from COVID, let alone needing hospitalization or worse. They’re also contributing to herd immunity, a sought-after, community-wide resilience against the virus that will help shield people who cannot be vaccinated because they’re too young or have health conditions.
But is it a green light for attending that year-delayed gala wedding, hopping a plane to Maui, or raising a glass at a favorite watering hole?
As with all things COVID, the answers are not absolute and are subject to change. And we’re in a tricky spot, racing to deliver shots as the number of positive cases is rapidly growing, with some epidemiologists saying we’re clearly already being swept up in a fourth wave of COVID infections. For his part, Dr. Anthony Fauci, America’s leading infectious disease scientist, hasn’t changed his behavior much post vaccination, and local experts say the same.
With all of that in mind, here’s what we’ve learned about responsible post-vaccine behavior and our path to a more normal existence.
Can I chuck my mask?
(Bigstock Image)
The Centers for Disease Control and Prevention advises vaccinated people to keep wearing masks, avoid crowds and poorly ventilated indoor spaces, and stay six feet apart from others.
But there are exceptions. People can skip the mask when in a home or a private setting with a small group in which everyone is fully vaccinated. Vaccinated people can forgo masks when in a private space with members of one other household that is not vaccinated.
Why mask when vaccinated? While the science is encouraging, there is still a chance that someone who is vaccinated can contract COVID, be asymptomatic and pass it to others.
But the mask habit is starting to slip for some. University of Washington epidemiologist Brandon Guthrie is part of a study that has returned repeatedly to the same mall and store entrances in King County to tally those with and without masks. The researchers have recently noticed a dip in usage among those 65 and older — which is also the population with the highest rates of vaccination.
“It’s absolutely understandable,” he said. “But we really need to keep doing the things that work [in stopping COVID], especially the things that are not that big of a deal, like wearing masks.”
When do we reach herd immunity?
Earlier in the pandemic, experts tossed out vaccination goals that would deliver us to the blissful, mask-free state of herd immunity, often hovering around 75 to 85% of the public. That was then.
“Our ability to understand what that number is went out the window a while ago,” said Dr. Joshua Schiffer, an infectious disease modeler at the Fred Hutchinson Cancer Research Center.
A lot of the blame falls on variants, which are mutations of the original COVID virus. There are five so-called variants of concern in the U.S., each with varying degrees of superpowers that make them more deadly, more infectious, and/or more resistant to vaccines. Those last two traits in particular, in combination with the potential that immunity could decline over time among vaccinated people, make it challenging to set a hard and fast herd immunity target. (There is promising new data showing vaccine protection lasts for a minimum of six months.)
As some countries reach high rates of immunization, researchers will be closely watching to see how variants do or don’t spread, which will give some indication of where herd immunity lies. Israel, for example, is a global leader in vaccination: 60% of the population has received at least one shot, and 55% are fully vaccinated.
A mass vaccination site hosted at Amazon earlier this year. (GeekWire Photo / Taylor Soper)
What’s up with the variants?
It’s in a virus’ nature to keep mutating and evolving, and the versions that have a reproductive advantage will start winning out over other variants. That’s certainly what we’ve seen during the pandemic.
The UK variant, B.1.1.7, has become the dominant form of COVID in the U.S. It's roughly 50% more infectious and causes more severe infections, but luckily appears to be minimally resistant to current vaccines.
If not for the new variants, we likely could have avoided the fourth wave of infections, said Schiffer. And now state officials are warning that more restrictive rules for businesses in some counties could be implemented soon to try and tamp down the surge.
“It’s frustrating that we have tended to open up the things first that are the highest risk level,” said Guthrie, such as restaurants, bars and gyms, “and open the things that have the most benefit and lowest risk, like schools, last.”
So how do we fight the variants?
Vaccinations are making a difference already, experts said. More than 44% of King County residents age 16 and older have had at least one COVID vaccine shot, which is more than one-third of the entire population in the county that includes Seattle, Bellevue and Redmond.
“Had we not had 40% of our population vaccinated, we would be completely underwater right now,” Schiffer said. “That is having a very strong effect in terms of protecting people.”
And new vaccines are in the works. UW Medicine announced this week that it’s recruiting volunteers for a clinical trial evaluating a “second-generation” COVID-19 vaccine. The second stage of the study will include vaccines made with multiple viral proteins in the hope of boosting protection against variants.
Moderna said this week that it will supply booster shots against the variants by the end of this year.
What about our kids?
As many folks are giddy with visions of life when fully vaccinated, one might forget an important population who can’t yet get their shots: kids 15 and under. Pfizer has the only vaccine approved for 16 and 17 year olds in the U.S., and there are promising early results for the Pfizer and Moderna vaccines for kids as young as 12. The hope is they’ll be able to get the shots before school starts next year. Tests are underway in even younger children, with Moderna running trials in kids 6 months old and up.
In the meantime, while parents might start feeling more at ease about the risks they face, their younger kids are still vulnerable.
“Psychologically, people can really transfer their own level of concerns or feeling that you are safe onto other people around them, especially when there is this difference in vaccination status,” Guthrie said.
When and how do we get to ‘normal’?
The hope now is that a new, even more problematic variant doesn’t emerge as we race toward widespread vaccination. Research by Schiffer and his team shows that super-spreader events play a key role in launching variants, which argues for keeping crowd sizes to a minimum.
It’s also going to be important to overcome vaccination hesitancy. While we could reach high vaccination rates in some areas, there will likely be population pockets with less protective numbers that could become hotbeds for infections, Guthrie cautioned. Researchers are already noticing lower vaccination rates in parts of Eastern Washington, as reported in the Seattle Times, particularly in more politically conservative areas.
Worldwide vaccination is also essential. As many of the variants have shown, the virus doesn’t respect national borders.
While the risk of infection drops for those who are vaccinated, all of these factors mean that “normal” will return slowly.
“When this pandemic ends, it’s not going to be a sudden end,” Schiffer said. “It’s going to be in fits and starts and a gradual return to normal life.”
Bottom line: Do I book a flight to the Yucatán or Yosemite?
On a more upbeat note, the federal government has eased some travel-related restrictions for those who are vaccinated.
Fully vaccinated people can travel domestically without doing a COVID test before or after a trip, though Hawaii has its own restrictions. For international travel, vaccinated people still need a negative test before returning to the U.S., and the CDC says they “should” get tested 3-to-5 days after returning. Some countries enforce their own testing rules before entering. In the U.S., neither domestic nor foreign journeys require a post-travel quarantine period.
What do you get when you combine Gears of War and Destiny? Outriders is the latest looter-shooter, and it follows the formula a little too closely. Or at least that's what I thought when I started the game, but that opinion swiftly changed, to the point where I'm now sort of enjoying it, even if just casually.
Outriders sees humanity flee a dying Earth for a new potential home on planet Enoch. As our merry band of heroes lands on the new planet, they notice weird anomalies that turn many of the crew into ashes or give them unique superpowers. The game then turns the clock forward by around 30 years, with the player character finding themselves in the middle of a war between humans and, well, aliens.
A little note – I played the game on both PC and a base PS4, with the majority of my endgame spent on the PS4.
The game, developed by Polish studio People Can Fly, feels like a fair attempt at getting in on the craze of looter-shooters that have been gobbling up gamers' attention over the last few years. It's not much of a surprise, considering the studio was once owned by Epic Games and helped co-develop Fortnite, arguably the most mass-friendly game in recent history. Outriders goes in a different direction; even with elements lifted from contemporary looters like Destiny, Warframe or The Division, it mixes up its core gameplay with some interesting decisions.
This Feels Familiar
What’s that? Bullet Sponge? Yep, that’s in Outriders, but not as much as you’d think.
Let's get this out of the way: if none of the looter-shooters on the market has done it for you, then Outriders won't do much for you either. It feels quite derivative but has a few cards up its sleeve to distinguish itself from its peers. For starters, the game feels more welcoming to solo players instead of focusing all of its mechanics on a co-op/party system. Yes, you can invite your friends to form a roster, but Outriders' story really feels like a job for one man, or woman.
There’s a heavy focus on storytelling, complete with cinematic cutscenes, but its presentation is a little too generic for my taste. ‘We landed on an alien planet and weird stuff is happening’ – how many times have you seen that story play out in games or films?
Outriders Combat looks similar to Gears of War
The inclusion of dialogue choices is a strange one, as they don't really affect the story. It's akin to the dialogue system from games like Horizon Zero Dawn, where it primarily exists to give players the option to get more backstory from characters. That said, the characters here have nothing on Aloy or Rost from HZD, and I can only describe them as 'generic', which goes for pretty much everything in this game.
Gameplay – Back To Basics
There are four classes in Outriders: Technomancer, Pyromancer, Trickster and Devastator. They do exactly what they sound like, similar to class-based systems in other looter-shooters. Each has different special abilities gated by cooldown meters, and pretty much all of them reward an aggressive playstyle.
My PC playthrough was with the Pyromancer class, while I chose the Trickster on PS4. The former is a medium-range character class, with the latter being focused on close-range combat. While the game looks like it focuses a lot on cover-based combat, it really doesn’t feel like it once you’ve gone through the initial hours.
Outriders favours an aggressive playstyle, with enemies constantly on top of you, forcing you to use your powers and move around a lot. I like the fact that regaining health is linked to the types of kills I can get, so using abilities is a big part of combat. The types of guns available are par for the course, with the usual range of assault rifles, shotguns and sniper rifles.
Outriders character creator
The character creator is basic, but at least you can change your appearance, bar body type, any time you want. I would’ve preferred a wider variety of physiques to select from, and the current roster looks quite close to something from Gears of War, without the body proportions being cartoonishly large.
‘World Tier’ Levelling System
The ‘world tier’ levelling system gets the job done, providing better loot and harder enemies to beat, if you set it to auto select the highest tier. This is the closest to a difficulty settings menu that you can find in the game, and you can change it on the fly.
Graphics, Visuals & Performance
Considering Outriders is a cross-gen game, I didn’t expect much from its visuals. It looks perfectly fine on any platform. The character and environment designs are perfectly serviceable, and with so much happening on the battlefield, your screen can get a little messy every so often.
Performance on PC is really good, partly thanks to the inclusion of NVIDIA DLSS. If you've got an RTX graphics card, you won't have to worry about performance at all, and even with something older you probably won't run into too many issues. I played the game on my PC with an AMD Ryzen 7 3700X and an RTX 2060 Super, paired with 32 GB of RAM. While the game did slow down to slightly below 60 FPS in heavy areas like the crowded main hub, it was perfectly fine during intense combat sequences.
Playing #Outriders on a last-gen console is a PAIN. I played about 7 hours on PC, now got started on PS4 and man, 30 FPS and constant loading ssuuuuuuuuuuuucks
Performance on PS4 though is a little rocky. The game targets 30 FPS on the last-gen consoles, and after experiencing the game on PC, my experience on the console was tainted. Add in texture pop-in and constant need for loading before almost any cutscene, and it’s clear that this game needs to be played on better hardware.
Verdict
Under different circumstances, I would recommend players wait a while until the game goes on sale. However, with the game now available on Xbox Game Pass (for console), I think Xbox gamers can go ahead and enjoy the game to their heart’s desire. After all, what have you got to lose? Well, $60/Rs. 3999 if you plan on playing on a PlayStation console, and slightly less on PC. While the game is serviceable, with decent gameplay mechanics, I don’t see it being a Destiny-killer just yet. The good thing is, the developers don’t either, as it really kind of stands in the grey area between a single-player RPG shooter and a co-op online looter.
Yesterday, the infamous Jon Prosser tweeted that Google has shelved the upcoming Pixel 5a, and Android Central later corroborated it. According to the sources, the next Pixel faced the axe due to the global chip shortage, which has already affected significant players worldwide. But Google says otherwise.
In a statement to Android Police, a Google spokesperson indicated that the next Pixel smartphone has not been cancelled and that it will be launched in line with last year's Pixel 4a. Here's what the statement reads.
“Pixel 5a 5G is not cancelled. It will be available later this year in the U.S. and Japan and announced in line with when last year’s a-series phone was introduced.“
According to Google, the Pixel 5a is indeed coming. However, it might not be available in every region, as the statement implies that the next affordable Pixel smartphone will only be available in the United States and Japan. Going by Google's statement, it might not come to Europe and India, at least this year. The reason behind this could be the global silicon shortage, which lends some weight to Prosser's and Android Central's reports.
Pixel 5a Rumoured Specifications
Pixel 5a Renders
If launched, the Pixel 5a will succeed last year's Pixel 4a as an affordable take on Google's 2020 flagship, the Pixel 5. According to some previous rumours, the Pixel 5a is expected to feature a 6.2-inch FHD+ OLED display, with no word on its refresh rate. According to renders from Steve Hemmerstoffer, a punch-hole cutout at the top left corner will house its front camera. The main camera setup is likely to consist of a 12MP primary camera alongside a 16MP ultrawide camera. The Pixel 5a will retain the 3.5mm audio jack, fingerprint sensor and stereo speakers from its predecessor.
Prosser also shared renders for the alleged Pixel Watch. According to him, Google is working on a smartwatch dubbed Pixel Watch, currently being referenced as ‘Rohan’ internally. The watch is expected to debut later this year in October with Google’s in-house chipset.
We keep adding a lot of great blog posts to the LearnCPlusPlus.org website for beginners, new developers in C++ Builder, and professionals. Here is another great set of C++ Builder post picks from the last week.
Do you want to modernize your C++ applications on Windows? Try Styles in C++ Builder; they come officially with RAD Studio and are very easy to use. You can easily use Styles in C++ Builder VCL projects on Windows, or in C++ Builder FMX projects for Multi-Device Applications. Last week we also started to add more posts to the "Introduction to C++" series. Start learning C++ by learning to use input and output commands in C++ Builder. Do you want to learn or refresh your memory on constants and literals in C++? Do you want to learn about data types and the size of variables?
Professionals, we also have two great video picks from CppCon. Do you want to learn how to construct generic algorithms in C++? Watch the video by Ben Deane below. Do you want to see what is new about strings in the C++20 standard?
If you are a beginner or want to jump into C++ Builder please visit our LearnCPlusPlus.org website for the great posts from basics to professional examples, full codes, snippets, etc.
Ty Collins (left) and Mike Radenbaugh of Rad Power Bikes win Young Entrepreneur of the Year at the 2019 GeekWire Awards. (GeekWire Photo / Kevin Lisota)
— Rad Power Bikes co-founder Ty Collins has stepped down from the rapidly growing e-bike startup. Collins, who was chief marketing officer, is now in an advisory role and remains close with the company, participating in onboarding and calls with senior leadership.
“I spent six wonderful years building Rad and grinding in the startup lifestyle and was simply just ready to be able to go to the park on a weekday with my wife and kids,” Collins told GeekWire.
The pandemic has spurred huge demand for Rad’s e-bikes. The Seattle-based company is profitable and raised a $150 million round earlier this year.
“As for what’s next, I am dedicating an unknown amount of time to being with my family, I am sure the startup life will suck me back in at some point,” said Collins.
The company’s chief revenue officer, Jed Paulson, will oversee marketing efforts with Collins’ departure.
Ronald Howell. (WRF Photo)
— Washington Research Foundation (WRF) CEO Ron Howell will be retiring at the end of April after 29 years leading one of Washington state’s largest private foundations. WRF CFO Jeff Eby will be acting CEO until a new leader is announced.
“I love that I met so many great innovators and was able to learn about interesting science and engineering, then think creatively about its role, its value, and how we could help,” Howell said in a statement.
Washington Research Foundation was founded in 1981 by Tom Cable, Bill Gates Sr. and W. Hunter Simpson. The organization supports life science and technology through grants, commercialization, and licensing technologies from universities and other nonprofit research institutions. University of Washington, for example, has earned more than $445 million in licensing revenue through WRF.
During Howell’s tenure, the organization expanded from primarily intellectual property management to include grant-making programs and a venture investment arm, WRF Capital. The organization’s assets grew from $13 million to $300 million.
“Under Ron’s leadership, WRF has thrived and dramatically expanded its mission,” said Cable. “…Thanks to Ron, WRF is on very solid footing as it moves forward with a primary focus on the support of life-science-related technologies.”
— Expedia Group added SoftBank deputy general counsel Patricia Menendez-Cambo to its board. She fills a vacancy created by the resignation of longtime board member A. George “Skip” Battle. Read the story.
Leila Kirske. (Marchex Photo)
— Seattle-based sales and marketing analytics company Marchex promoted Leila Kirske as its new CFO. Kirske joined Marchex in late 2020 as SVP of finance and administration.
Prior to that, she was CFO at health tech company 98point6. She has also held executive finance roles at Seattle startups Tune, Simply Measured, and EMC’s Isilon division.
— Saad Syed, the former VP of engineering at Azure Core, has left Microsoft and will join Stripe as head of reliability services and business continuity.
Syed spent 20 years at Microsoft and was a founding member of Project RedDog, which would go on to become the company’s cloud computing service Microsoft Azure.
Kristin McNelis .(Armoire Photo)
— Clothing rental service Armoire announced Kristin McNelis as its first CMO. McNelis was also the Seattle startup’s first customer when it launched.
Based in Boston, McNelis was most recently a senior director at Drinkworks, a joint venture of AB InBev and Keurig Dr Pepper. She was a classmate of Armoire co-founder and CEO Ambika Singh at the MIT Sloan School of Management.
“As Armoire’s first customer in 2016, I intimately understand the unique solution that the clothing membership provides to busy, boss lady women who want to look good wherever their crazy lives take them,” said McNelis.
— Hunt A Killer hired Yasmin Moorman, Discovery Inc.'s former VP of digital growth, as its chief business and operations officer.
Founded by Ryan Hogan and Derrick Smith, the company offers monthly subscription boxes that deliver stories, clues, correspondence, interactive tasks and more in the pursuit of helping to solve a crime. Based in Seattle and Baltimore, Hunt A Killer reported over $50 million in revenue last year and is expanding its digital offerings and delivery formats.
As the battle between Apple and Epic Games rages on, we’re starting to find some interesting details about the business model of both companies’ online storefronts. In a recent court filing by Apple against Epic Games, it has been revealed that the latter lost close to $450 million on Epic Games Store (EGS) in the last two years.
You can access the entire filing here, and also read Epic’s response against Apple regarding their investments and profits, but here’s the gist:
Epic lost around $181 million on EGS in 2019
Epic was projected to lose around $273 million on EGS in 2020
Epic committed $444 million in minimum guarantees for 2020 alone
Epic projects to lose around $139 million in 2021
In its response, Epic also revealed its investments and projected earnings through the Epic Games Store, stating that it expects the storefront to turn a profit by 2023, as reported by DSOGaming.
The Apple-Epic lawsuit is quickly approaching trial in May, and as both companies prepare to get on the battlegrounds, more information about both storefronts is bound to become public information.
It’s not a surprise to see the projected losses on EGS’ part, considering how much the company has been investing to expand its user base by giving away big games (like GTA V) frequently. We’ve also seen the company bag well-known IP as exclusives for its storefront, which also would require a large investment.
Fortnite – The game that started the fiasco between Apple and Epic Games
Recent developments show that Epic might be planning to go public soon, so that could provide a way to subsidize these investments. Even if you’re not a fan of Epic Games Store, the company has been making the right moves to gain new users, despite lacking basic features available on rival PC storefront Steam.
The Apple-Epic legal battle is set to take place in May, with more information potentially coming in the following weeks.
Matt Ehrlichman, Porch founder and CEO. (Porch Group Photo)
An investment firm with a short position in Porch Group, standing to benefit if its share price falls, released a report questioning the Seattle-based home services software company’s underlying business metrics, communications with investors, and accounting on a variety of fronts.
The report by Ben Axler, Spruce Point Capital Management's founder and chief investment officer, includes a claim that Porch made conflicting statements to the Securities and Exchange Commission about a May 2019 transaction in which Porch CEO Matt Ehrlichman purchased Porch shares from home improvement giant Lowe's Companies Inc., which had been one of Porch's largest shareholders.
In a response to questions from the SEC, filed in November 2020, the company said the $4 million stock sale by Lowe’s to Ehrlichman underestimated the fair value of Porch stock by more than $33 million, and said that difference qualified as compensation expense for Ehrlichman under Financial Accounting Standards Board rules.
In that filing, Porch said it “concluded that the difference between the purchase price paid by the Porch CEO and the estimated fair value of such shares represents compensation expense.”
Later, in a January 2021 prospectus, Porch said that while it was required to recognize the amount as a compensation expense, the $33 million was “being excluded from the 2020 Summary Compensation Table as Porch was not a party to the transaction and does not view the stock purchase by Mr. Ehrlichman as compensatory.”
The Spruce Point report asks, “By having CEO Ehrlichman purchase Lowe’s shares, and claim that it was substantially below market value, was the Company using this as diversion from taking a goodwill impairment?”
GeekWire contacted Ehrlichman shortly after the report was issued on Thursday morning, offering an opportunity to explain the difference in the SEC filings as noted by the Spruce Point Capital report. A public relations representative for Porch provided a statement on Friday afternoon, addressing the report generally: “The allegations are baseless and misleading, contain significant inaccuracies, and are made by someone who profits if our stock price goes down. We’ve nothing more to add at this time.”
Porch shares closed Friday at $16.80 per share, down from a recent peak of $18 a share on April 1, following the release of its quarterly earnings.
The company went public on the Nasdaq in late December, raising more than $322 million through a merger with PropTech Acquisition Corp., a publicly traded special purpose acquisition corporation, or SPAC, and a private investment from Wellington Management Company.
Dendron founders Kevin Lin, left, and Kiran Pathakota. (Dendron Photos)
New funding: Seattle-based startup Dendron has raised $2 million in seed funding for its open source note-taking tool that helps users manage any amount of information.
The founders: Kevin Lin is a former software engineer at Amazon Web Services who left after five years to launch his own company. The former Geek of the Week went through Y Combinator as a solo founder before being joined by Kiran Pathakota, one of Dendron's first customers, as co-founder. Pathakota is a former Amazon technical program manager who also spent time at Facebook and Microsoft.
The tech: The way Lin sees it, the big problem with note taking and knowledge management in general is that there’s too much information and no good way of finding what you need when you need it.
“Google organizes the world’s information to make it accessible, but there’s nothing that does that for personal or institutional information,” Lin said. “If search worked in this case, Google Drive and Google Docs should be enough for note taking — but they’re not.”
Dendron takes the concepts of IDEs (integrated development environments), which let developers uniformly structure, update, and find specific areas of code, and applies them to general knowledge. Lin, who has a collection of over 20,000 notes that he manages with Dendron, said his tech makes it possible to "enforce a consistent structure across thousands of documents so that you can always find what you need."
Monetization: Lin says that Dendron plans to make money by charging teams and enterprises who want access to additional features like single sign-on, private registries and fine-grained access control.
Growth plans: Lin and Pathakota are currently the only two employees, until an intern joins in May. But they’re interviewing for full time engineers and have two potential candidates, one in South Korea and one in Hong Kong. “We are a remote-first company,” Lin said, adding that since Dendron is open source, “we also get regular contributions from the community.”
Final word: “What Excel did for numbers is what Dendron is doing with general information — providing users a framework in which they can organize and manage it at scale,” Lin said.
God of War creator David Jaffe says that Sony PlayStation is working on a rival to the Xbox Game Pass subscription service that should be announced soon.
In a new video posted on his channel, Jaffe claims to have talked to sources at Sony, stating the following:
“What I can tell you is I know they are doing some stuff because I know people at Sony who have told me that they are doing some stuff. There will be a response to Game Pass. What it is, we don’t know.”
While Jaffe left Sony PlayStation back in 2007, it’s clear that he still has enough contacts within the gaming industry (and more importantly, Sony) to back his claims.
Sony has yet to announce a Game Pass alternative officially, but PlayStation boss Jim Ryan did drop some hints regarding it in a previous interview with TASS –
“There is actually news to come, but just not today. We have PlayStation Now which is our subscription service, and that is available in a number of markets.”
Bethesda Games on Xbox Game Pass
Microsoft has been making big moves with regards to Xbox Game Pass, making multi-million dollar deals to secure future exclusives for Xbox players across console and PC. Even former Sony exclusives like MLB The Show are now available on the service, among other third-party games like Outriders. These games still retail for $60-$70 on PlayStation, while Game Pass subscribers can play them for $10-$15 per month.
Even if you’re a diehard PlayStation fan, you have to accept the fact that Xbox Game Pass’ value remains undefeated in the games industry. Sony has made some swift moves recently, giving away some of its treasured PS4 exclusives for free either directly or through PS Plus. Now, with Game Pass showing no signs of stopping, it looks like Sony will have to play its cards sooner rather than later.
— Ghost Recon Breakpoint (@GhostRecon_UK) April 9, 2021
It ain’t much, but it’s honest work (?)
The roadmap can be accessed on a blog page, but it doesn't tell us much. There are only two things on the roadmap, both of them title updates:
Title Update 4.0.0 (Spring end) – “Focus on improving the players experience with AI teammates, including a new progression system, added customisation and more features requested by the community.”
Title Update 4.1.0 (Fall 2021) – “Set to be one of the biggest operations so far, will release in the fall.”
Aside from these, there’s not much else to be gleaned from the blog post. Ubisoft elaborated on the first title update a little bit, saying the following about the improvements coming to the AI teams:
“The Teammate Experience Update is focused on improving your experience with your AI squad, while also adding some community requested features. Discover a new XP progression for your AI squad, and unlock new passive skills and abilities as you play. A dedicated quest log will also be available for you to experiment with the new AI squad features while rewarding you with cool and exclusive rewards!”
It’s no secret that Breakpoint was the point where Ubisoft had a complete meltdown (ironic) over its release schedule, delaying all games it had scheduled around this game’s release to later dates. Since then, the French developer has restructured its content delivery slightly, relying slightly less on broken games being pushed out at launch.
While Breakpoint did not fare well in reviews at launch, it has gotten better. This seems to be a running theme with live service games, and the 2021 roadmap should only bring good things for Breakpoint's community.
Microsoft threat analysts have been tracking activity where contact forms published on websites are abused to deliver malicious links to enterprises using emails with fake legal threats. The emails instruct recipients to click a link to review supposed evidence behind their allegations, but are instead led to the download of IcedID, an info-stealing malware. Microsoft Defender for Office 365 detects and blocks these emails and protects organizations from this threat.
In this blog, we showcase our analysis on this unique attack and how the techniques behind it help attackers with their malicious goals of finding new ways to infect systems. This threat is notable because:
Attackers are abusing legitimate infrastructure, such as websites’ contact forms, to bypass protections, making this threat highly evasive. In addition, attackers use legitimate URLs, in this case Google URLs that require targets to sign in with their Google credentials.
The emails are being used to deliver the IcedID malware, which can be used for reconnaissance and data exfiltration, and can lead to additional malware payloads, including ransomware.
This threat shows attackers are always on the hunt for attack paths for infiltrating networks, and they often target services exposed to the internet. Organizations must ensure they have protections against such threats.
While this specific campaign delivers the IcedID malware, the delivery method can be used to distribute a wide range of other malware, which can in turn introduce other threats to the enterprise. IcedID itself is a banking trojan that has evolved to become an entry point for more sophisticated threats, including human-operated ransomware. It connects to a command-and-control server and downloads additional implants and tools that allow attackers to perform hands-on-keyboard attacks, steal credentials, and move laterally across affected networks to deliver additional payloads.
We continue to actively investigate this threat and work with partners to ensure that customers are protected. We have already alerted security groups at Google to bring attention to this threat as it takes advantage of Google URLs.
Microsoft 365 Defender defends organizations by using advanced technologies informed by Microsoft Defender for Office 365 and backed by security experts. Microsoft 365 Defender correlates signals on malicious emails, URLs, and files to deliver coordinated defense against evasive threats, their payloads, and their spread across networks.
Microsoft Defender for Office 365 supports organizations throughout an attack’s lifecycle, from prevention and detection to investigation, hunting, and remediation–effectively protecting users through a coordinated defense framework.
Tracking malicious content in contact forms
Websites typically include contact form pages as a way to let visitors communicate with site owners without revealing the owners’ email addresses to potential spammers.
However, in this campaign, we observed an influx of contact form emails targeted at enterprises by means of abusing companies’ contact forms. This indicates that attackers may have used a tool that automates this process while circumventing CAPTCHA protections.
Figure 1. Sample contact form that attackers take advantage of by filling in malicious content, which gets delivered to the target enterprises
In this campaign, we observed that the malicious email arriving in the recipient’s inbox from the contact form query appears trustworthy because it was sent from trusted email marketing systems, helping it evade detection. Because the emails originate from the recipient’s own contact form on their website, the email templates match what they would expect from an actual customer interaction or inquiry.
As attackers fill out and submit the web-based form, an email message is generated to the associated contact form recipient or targeted enterprise, containing the attacker-generated message. The message uses strong and urgent language (“Download it right now and check this out for yourself”), and pressures the recipient to act immediately, ultimately compelling recipients to click the links to avoid supposed legal action.
Figure 2. A sample email delivered via contact forms that contain malicious content added by attackers
Along with the fake legal threats written in the comments, the message content also includes a link to a sites.google.com page where the recipient can view the alleged stolen photos.
Clicking the link brings the recipient to a Google page that requires them to sign in with their Google credentials. Because of this added authentication layer, detection technologies may fail to identify the email as malicious altogether.
After the email recipient signs in, the sites.google.com page automatically downloads a malicious ZIP file, which contains a heavily obfuscated .js file. The malicious .js file is executed via WScript to create a shell object for launching PowerShell to download the IcedID payload (a .dat file), which is decrypted by a dropped DLL loader, as well as a Cobalt Strike beacon in the form of a stageless DLL, allowing attackers to remotely control the compromised device.
The downloaded .dat file loads via the rundll32 executable. The rundll32 executable then launches numerous commands related to the following info-stealing capabilities:
Machine discovery
Obtaining machine AV info
Getting IP and system information
Domain information
Dropping SQLite for accessing credentials stored in browser databases
Contact form email campaign attack chains lead to IcedID malware
The diagram in Figure 3 provides a broad illustration of how attackers carry out these malicious email campaigns, starting from identifying their targets’ contact forms and ending with the IcedID malware payload.
Figure 3. Contact form attack chain results in the IcedID payload
We noted a primary and secondary attack chain under the execution and persistence stages. The primary attack chain follows an attack flow from downloading malicious .zip file from the sites.google.com link, all the way to the IcedID payload. The secondary attack chain, on the other hand, appears to be a backup attack flow for when the sites.google.com page in the primary attack chain has already been taken down.
In the secondary chain, users are redirected to a .top domain, which in turn accesses a Google User Content page that downloads the malicious .ZIP file. Further analysis reveals that the forms contain malicious sites.google.com links that download the IcedID malware.
When run, IcedID connects to a command-and-control server to download modules that run its primary function of capturing and exfiltrating banking credentials and other information. It achieves persistence via scheduled tasks. It also downloads implants like Cobalt Strike and other tools, which allow remote attackers to run malicious activities on the compromised system, including collecting additional credentials, moving laterally, and delivering secondary payloads.
Using legal threats as a social engineering tactic
This campaign is successful not only because it takes advantage of legitimate contact form emails, but also because the message content passes as something recipients would expect to receive. This creates a high risk of attackers successfully delivering to inboxes emails that would otherwise be filtered into spam folders.
In the samples we found, attackers used legal threats as a scare tactic while claiming that the recipients allegedly used their images or illustrations without their consent, and that legal action will be taken against them. There is also a heightened sense of urgency in the email wording, with phrases such as “you could be sued,” and “it’s not legal.” It’s a sly and devious approach since everything else about this email is authentic and legitimate.
We observed more emails sent by attackers on other contact forms that contain similar wording around legal threats. The messages consistently mention a copyright claim lure by a photographer, illustrator, or designer with the same urgency to click the sites.google.com link.
Figure 4. Samples of contact form emails that use the photographer copyright lure with a sites.google.com link
In a typical contact form, users are required to input their name, email address, and a message or comment. In the samples we obtained, attackers used fake names that start with “Mel,” such as “Melanie” or “Meleena,” and used a standard format for their fake email addresses: a portion of their fake name, plus words associated with photography, plus three numbers. Some examples include:
mphotographer550@yahoo.com
mephotographer890@hotmail.com
mgallery487@yahoo.com
mephoto224@hotmail.com
megallery736@aol.com
mshot373@yahoo.com
Defending against sophisticated attacks through coordinated defense
As this research shows, adversaries remain motivated to find new ways to deliver malicious email to enterprises with the clear intent to evade detection. The scenarios we observed offer a serious glimpse into how sophisticated attackers’ techniques have grown, while maintaining the goal of delivering dangerous malware payloads such as IcedID. Their use of submission forms is notable because the emails don’t have the typical marks of malicious messages and are seemingly legitimate.
To protect customers from this highly evasive campaign, Microsoft Defender for Office 365 inspects the email body and URL for known patterns. Defender for Office 365 enables this by leveraging its deep visibility into email threats and advanced detection technologies powered by AI and machine learning, backed by Microsoft experts who constantly monitor the threat landscape for new attacker tools and techniques. Expert monitoring is especially critical in detecting this campaign given the delivery method and the nature of the malicious emails.
In addition, the protection delivered by Microsoft Defender for Office 365 is enriched by signals from other Microsoft 365 Defender services, which detect other components of this attack. For example, Microsoft Defender for Endpoint detects the IcedID payload and surfaces this intelligence across Microsoft 365 Defender. With its cross-domain optics, Microsoft 365 Defender correlates threat data on files, URLs, and emails to provide end-to-end visibility into attack chains. This allows us to trace detections of malware and malicious behavior to the delivery method, in this case, legitimate-looking emails, enabling us to build comprehensive and durable protections, even as attackers continue to tweak their campaigns to further evade detection.
By running custom queries using advanced hunting in Microsoft 365 Defender, customers can proactively locate threats related to this attack.
To locate emails that may be related to this activity, run the following query:
EmailUrlInfo
| where Url matches regex @"\bsites\.google\.com\/view\/(?:id)?\d{9,}\b"
| join EmailEvents on NetworkMessageId
// Note: Replace the following subject lines with the one generated by your website's Contact submission form if no results return initially
| where Subject has_any('Contact Us', 'New Submission', 'Contact Form', 'Form submission')
To find malicious downloads associated with this threat, run the following query:
DeviceFileEvents
| where InitiatingProcessFileName in~("msedge.exe", "chrome.exe", "explorer.exe", "7zFM.exe", "firefox.exe", "browser_broker.exe")
| where FileOriginReferrerUrl has ".php" and FileOriginReferrerUrl has ".top" and FileOriginUrl has_any("googleusercontent", "google", "docs")
As this attack abuses legitimate services, it’s also important for customers to review mail flow rules to check for broad exceptions, such as those related to IP ranges and domain-level allow lists, that may be letting these emails through.
We also encourage customers to continuously build organizational resilience against email threats by educating users about identifying social engineering attacks and preventing malware infection. Use Attack simulation training in Microsoft Defender for Office 365 to run attack scenarios, increase user awareness, and empower employees to recognize and report these attacks.
Emily Hacker with Justin Carroll, Microsoft 365 Defender Threat Intelligence Team
Twitch streamer Amouranth keeps the camera running while she sleeps, in December of 2020. For every 20 new subscribers she acquired, she promised to stay asleep for an extra hour. (Twitch screenshot)
Twitch hit another viewership record last month, buoyed by hot content like watching people sleep and a roleplaying mod for an eight-year-old sandbox game. It’s possible we’ve all finally gone insane.
This data is from the latest State of the Stream report, which covers what happened in March in the world of livestreaming. It comes courtesy of the Israeli firm StreamElements, which provides tools and services for video-on-demand production, and its analytics partner Rainmaker.gg.
First, the obvious news: Amazon’s livestreaming platform Twitch still can’t stop breaking viewership records. March was the service’s biggest month to date, with more than 2 billion hours watched on the platform, slightly edging out the previous record set in January.
Facebook would be in good shape in almost any other circumstances. It’s just that Twitch is a juggernaut. (StreamElements/Rainmaker.gg)
This is a 105% increase year-over-year for Twitch, which is simultaneously more popular and more controversial than it has ever been. Between its issues with the American music industry and Wednesday’s bizarre announcement that users can face account suspension or deletion for offline activities, Twitch is sailing uncharted waters.
Facebook Gaming is also hanging in there, with just under 400 million hours of content watched in March. This is a sharp drop from its numbers in January, but a slight increase over February. It’s the same old story for Facebook, which clearly has and is holding onto an audience for its gaming-focused live content, but it’s barely a footnote next to Twitch.
When people will pay to watch you sleep
Ludwig Ahgren, left, was asleep on stream on April 4 when his fiancée, Lauren Dear, accidentally ended his “subathon” a few minutes early. (Twitch screenshot)
One peculiar blip on Twitch’s radar in March came from the sudden rise of “sleep streaming,” where popular streamers film themselves while they’re napping. Two streamers in particular, Ludwig Ahgren and Matthew “Mizkif” Rinaudo, clocked about 2 million hours watched in March during which neither of them was actually conscious.
In Rinaudo’s case, it was largely due to an experiment, where he went to sleep for five hours with Twitch’s media share option on. His subscribers were able to donate and play video clips on his stream while he slept, which naturally meant some of them tried to wake him up with loud noises, but Rinaudo remained asleep. When he woke up, he used part of the $5,500 in donations he’d collected to buy a member of his audience a Nintendo Switch.
Ahgren’s story is a little crazier. On March 14, Ahgren began a stream he called a “subathon,” where he pledged to keep broadcasting for another 20 seconds every time he got a new subscriber on his Twitch channel.
This succeeded beyond his expectations, and Ahgren ended up staying online constantly from March 14 to April 4. Whenever he slept, the camera stayed on, and his audience trolled him by continually digging up more subscribers to force Ahgren to stay broadcasting for even longer.
The broadcast finally ended on April 4 when Ahgren’s fiancée, Lauren Dear, accidentally shut it off. At that point, Ahgren had been online around the clock for almost three weeks, and in so doing, had acquired one of the highest subscriber counts in Twitch’s history. It also boosted him into the No. 2 spot for most-watched streamer of the month, although he still didn’t get even half the numbers of the perennial viewership champion Félix “xQc” Lengyel.
This happens sometimes with Twitch data. Someone goes viral doing a strange thing on stream, like livestreaming themselves while they sleep, and it turns out it’s not actually all that rare an event. (StreamElements/Rainmaker.gg image)
Sleep streaming has apparently been happening on the margins of the Twitch scene for a little while now. Between them, Rinaudo and Ahgren created enough of a sudden spike in sleep-related viewing that it became statistically significant, and analysts subsequently noticed that they were inexplicably not the only people doing it.
In a way, this isn’t that weird. Since the start of the Internet, there’s been a peculiar fascination among users with vicarious looks into other people’s lives. Part of what’s kept the Just Chatting category No. 1 on Twitch for most of the last year amid the pandemic is a desire for simulated human contact, as vloggers hold onto big audience numbers by doing mundane tasks around their homes.
While no one else even came close to Rinaudo’s hours watched, other popular sleep streamers in March included Kaitlyn “Amouranth” Siragusa, a cosplay model who some sources claim was the No. 1 most-watched female streamer on Twitch last month, temporarily dethroning Pokimane; Spanish League of Legends pro Antonio Espinosa; European fashion model Luisa Sax; and Dota 2 pro Yong-min “Febby” Kim.
At time of writing, Kim in particular is “live” but asleep on Twitch. His feed shows him next to an insert of a web browser that shows a Google search for “Twitch sleepers.” He has been streaming constantly for the last two and a half weeks.
The unifying factor among the top-10 sleep streamers seems to be that they’re either pro gamers with a tendency to pass out at their keyboards while running round-the-clock marathons to gather an audience, or beauty- and modeling-focused broadcasters. They’re obsessed, or they’re good-looking; there’s no real middle ground.
Grand Theft Auto role-playing
It’s a pretty typical month for video games on Twitch, with Riot’s team-based shooter Valorant as the newest game in the top 10. (StreamElements/Rainmaker.gg image)
As noted above, Just Chatting was once again the No. 1 category on Twitch by a significant margin in March. The rest of the service’s top 10 channels were occupied by some of the usual video-game suspects. League of Legends, Fortnite, Call of Duty: Warzone, and Minecraft all scored millions of hours of content watched, as is the status quo.
One anomaly here was 2013’s Grand Theft Auto V. While GTAV has been a consistent favorite on Twitch over the course of the last few years, owing largely to its GTA Online multiplayer mode, this is the first time since 2016, and potentially ever, that it’s been the No. 1 game on Twitch.
That’s only technically accurate, though. The spike in popularity comes from several high-profile streamers, such as xQc, summit1G, and TimTheTatman, exploring the world of GTAV roleplaying.
By installing mods like FiveM, players have set up private multiplayer servers that run GTAV with a unique set of rules. Instead of playing the game as one of its criminal protagonists, a player is instead dropped into the fictional city of Los Santos as one of its countless random bystanders.
This turns GTAV into an open-ended environment for diceless roleplaying, with users taking on the parts of cops, small-time criminals, shop clerks, and gang members. With some of Twitch’s most popular broadcasters jumping into the scene, it’s brought GTAV, or this version of it, to the top of the charts.
While its star has faded since last summer, keep an eye on Among Us next month. The Redmond, Wash.-produced murder mystery game released its long-awaited new map, the Airship, on March 31, which shifts Among Us’s usual science-fiction setting to a steampunk zeppelin. This was just the excuse that many streamers needed to jump back into the game.
It’s also led to a truly peculiar charity stream on April 6 that pitted several high-profile streamers, including Corpse Husband, Valkyrae, and Sykkuno, against Among Us community director Victoria Tran, two members of the cast of Netflix’s Stranger Things, three members of the Roots, and the Tonight Show‘s Jimmy Fallon.
Excerpts from the 50-minute stream are scheduled to air on the Tonight Show next week, so expect a confused phone call or two from your relatives.
In a public defeat for organized labor, a sizable majority of Amazon warehouse employees voted against unionization in Bessemer, Ala. in an election overseen by federal authorities Friday. While the final vote tally isn’t yet confirmed, the retail and cloud-computing giant appeared to win by a margin better than 2-1 of the votes cast.
More than 3,200 fulfillment center workers participated in the vote, with the initial count at 1,798 against union representation and 738 in favor. The National Labor Relations Board on Friday morning completed the two-day process of hand-counting the ballots at the NLRB offices in Birmingham, Ala. The defeat ended what was the most serious effort yet to unionize a segment of the workforce at the 27-year-old company, which employs 1.2 million people worldwide.
The unionization effort was backed by the Retail, Wholesale and Department Store Union (RWDSU). After the preliminary count was announced, RWDSU President Stuart Applebaum immediately said the union would object to the election based on what it claimed was the company’s illegal union-thwarting tactics.
BREAKING: RWDSU is formally filing Objections and ULP charges against Amazon’s blatantly illegal conduct during the @BAmazonUnion election. We won’t rest until workers’ voices are heard fairly under the law, and when they are, we believe they will be victorious. #BAmazonUnion #1u pic.twitter.com/blj1tvNOtf
In a news conference Friday following the vote count, Applebaum said that even in defeat, the vote represents a milestone. “Make no mistake about it: This still represents an important moment for working people,” he said. “People should not presume that the results of this vote are in any way a validation of Amazon’s working conditions and the way it treats its employees.”
Amazon managers disagreed, noting that fewer than 16% of employees voted to join the union.
“It’s easy to predict the union will say that Amazon won this election because we intimidated employees, but that’s not true,” an Amazon spokesperson said in a statement. “Our employees heard far more anti-Amazon messages from the union, policymakers, and media outlets than they heard from us. And Amazon didn’t win — our employees made the choice to vote against joining a union.”
At issue in Bessemer and other fulfillment centers isn’t primarily Amazon’s pay rate — slightly more than $15 an hour — but the work rate, lack of sufficient breaks, and employee tracking that some workers find onerous. In the effort to gain employee and political support, RWDSU organizers often cited Amazon’s industry-worst employee turnover rate as evidence.
University of Washington historian and professor Margaret O’Mara said that even though the union lost, it’s important to put the loss into context. The history of the labor movement, she added, is mostly one of numerous losses before each win.
“(The union) is not just going to pick up and go home,” she said. “They have political allies at the national level and even in the White House.”
At a press conference following the count, four Amazon employees who voted against the union said they didn’t need a union to advocate for better workplace conditions. “Amazon is not perfect, there are flaws, but we are committed to correcting those flaws,” Will Stokes, a Bessemer warehouse worker, said to the Washington Post.
The union specifically called out Amazon’s logistical tactics in the weeks leading up to the vote. First, the RWDSU said that Amazon convinced the city of Bessemer to shorten traffic signals around the plant in a cloaked effort to impede union organizers, who had been talking to workers stopped at the lights while entering and leaving the plant.
Later, the union also protested Amazon’s communication with the local branch of the United States Postal Service to get a mailbox installed on the plant’s grounds to collect unionization votes. Union officials later said Amazon managers then pushed workers to vote onsite, which they characterized as an intimidation tactic.
Vermont Sen. Bernie Sanders, a vocal supporter of the unionization campaign, supported the union’s claims. In a tweet posted after the vote count, Sanders said NLRB will be examining Amazon’s tactics.
“It also appears that some of Amazon’s anti-union efforts may have been in violation of NLRB law. And that is something that the union is addressing with the NLRB right now,” he tweeted Friday. “The fact that the company was able to force workers to attend closed-door, anti-union meetings is just one reason as to why we need legislation that finally gives workers a fair chance to win organizing elections.”
Marvel’s Avengers is, by all accounts, a ‘dead game’, with dwindling player counts across the board, but a new roadmap update might be the spark that brings happy fans back.
Prepare to see multiples of the same Hero and to survive a deadly HARM Room.
The Tachyon Anomaly Event will strike in April and the Red Room Takeover in May!
The new roadmap was posted earlier this week, providing a glimpse at what’s to come in the coming months. Here’s the current roadmap of content coming in Marvel’s Avengers:
April – Tachyon Anomaly expansion (New missions)
May – Red Room Takeover expansion (HARM room content drop)
Additional side content with level expansion and new outfits
Improved matchmaking and gear drops
Second half of 2021 – ‘Cosmic Cube‘, ‘Wasteland Patrol‘ and the long awaited ‘War for Wakanda‘ expansion
It’s not a surprise to see Square Enix not dropping a new Avengers War Table video presentation as it did in the weeks leading up to, and following, launch. It’s well known that the game is in great need of new content and characters, something that Crystal Dynamics hasn’t been able to deliver at the level that was promised. The new roadmap was elaborated upon in a new blog post, which might go unnoticed by many players.
The game was recently added to the PlayStation Now catalogue, which is akin to Microsoft’s xCloud service with its game streaming options. However, with Xbox Game Pass elevating similar games to higher levels, one has to wonder if Square Enix made the wrong move by partnering with PlayStation.
On Steam, Avengers isn’t even in the list of the top 100 games by player count, and a quick look at third-party statistics shows a dwindling player base of fewer than 1,000 concurrent players, even after the addition of Kate Bishop and Hawkeye. We don’t have access to player counts on PlayStation or Xbox, but they likely aren’t far from the trend we’re seeing on PC, which is currently the best version of the game.
Outriders, a recently released game somewhat similar to Marvel’s Avengers, has been seeing great success in player counts and engagement after being added to Game Pass. Judging by the replies to the roadmap update on Twitter and Square Enix’s forums, it’s clear that the game has a small but passionate fan base. We hope that the game delivers on its promises to those fans sooner rather than later, but with the studio still keeping mum on some of those pre-launch promises (remember Spider-Man?), one has to wonder just how long this game’s lifespan really ends up being.
Samsung has launched a new platform that lets iPhone users experience a Galaxy device from a web browser.
When users visit the “iTest” website, they’ll be prompted to add a web application to iPhone’s home screen. Once the experience is launched, it provides an interactive simulation of an Android device, reports AppleInsider.
“You’re about to get a little taste of Samsung, without changing phones. We can’t replicate every function, but you should quickly see that there’s nothing daunting about switching to the other side,” AppleInsider quoted the company as saying.
Users are able to browse the Galaxy Store, change the theme of their Android system and explore other features available on Galaxy devices, the report said. During the experience, there are also simulated phone calls and text messages highlighting different Android features, it added.
A report from earlier in 2021 suggested that brand loyalty was increasing for Apple and decreasing among Android brands. If that data is accurate, it could explain why Samsung is attempting to draw iPhone users in with a nifty feature to “sample the other side.”
The feature was first spotted by a MacRumors reader, who noted that it’s currently being promoted in New Zealand. Despite the .nz URL, the experience is available in other countries too.
Were you happy about not being on Facebook? Those sunny days are now over, as LinkedIn has become the latest victim of a data breach, with data of around 500 million users now up for sale.
According to CyberNews, an individual is selling user data on hacker forums, including users’ full names, email addresses, phone numbers, workplace information, account IDs, and links to other social media accounts. The alleged hacker is asking for a mere four-digit sum in US dollars, preferably paid in Bitcoin.
Addressing the incident, LinkedIn said in a statement: “We have investigated an alleged set of LinkedIn data that has been posted for sale and have determined that it is actually an aggregation of data from a number of websites and companies. It does include publicly viewable member profile data that appears to have been scraped from LinkedIn. This was not a LinkedIn data breach, and no private member account data from LinkedIn was included in what we’ve been able to review.”
In the meantime, Italy’s privacy watchdog has opened an investigation into the alleged data breach, since the dataset includes a large number of European users. The regulator has also warned European users to watch for suspicious activity involving their mobile numbers and bank accounts.
According to experts, the leaked data could be used for phishing attacks. Security analyst Paul Prudhomme said the breach could turn out to be a nightmare for corporations around the globe.
The LinkedIn incident comes after the Facebook data breach, where the personal data of 533 million Facebook users was put on sale on a hacking forum.
You can use Have I Been Pwned to check whether your data was exposed in the breach, and I’d advise subscribing to the service to get notified whenever your email or phone number turns up in a data breach.
Samsung on Friday launched its all-new Smart Monitor for the India market at a starting price of Rs 28,000.
The Smart Monitor is available in two models — the M7 that supports Ultra-High Definition (UHD) resolution in 32-inch screen size and the M5 that supports full HD (FHD) resolution in 32-inch and 27-inch screen sizes.
“At Samsung, we believe in bringing impactful innovations and our new Smart Monitor is an example of that,” Puneet Sethi, Vice President, Consumer Electronics Enterprise Business, Samsung India, said in a statement.
“Consumers no longer have to choose between different screens for varied uses as Smart Monitor brings it all together and offers the flexibility to smoothly transition from working and learning to entertaining oneself,” Sethi added.
The new monitor provides numerous connectivity options for both PCs and smartphones, the company said.
Users can connect their personal mobile devices with just a simple tap using Tap View, App Casting, Screen Mirroring or Apple AirPlay2.
The monitor also comes with in-built Netflix, YouTube, Apple TV and other OTT apps.
For home office and learning, the Smart Monitor operates Microsoft 365 applications without a PC thanks to embedded Wi-Fi, allowing users to view, edit and save documents on the cloud directly from the monitor, with help from their Bluetooth connected keyboard and mouse.
Remote access allows users to wirelessly and remotely access files from a PC or view content from a laptop.
Samsung Smart Monitor is now available on Samsung Shop, Amazon and leading retail stores.
Map implementations in Java represent structures that map keys to values. A Map cannot contain duplicate keys, and each key can be mapped to at most one value. The Map<K,V> implementations are generic and accept any K (key) and V (value) to be mapped.
The Map interface also includes methods for some basic operations (such as put(), get(), containsKey(), containsValue(), size(), etc.), bulk operations (such as putAll() and clear()) and collection views (such as keySet(), entrySet() and values()).
The most prominent Map implementations used for general purposes are: HashMap, TreeMap and LinkedHashMap.
In this article, we'll take a look at how to filter a Map by its keys and values:
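As a concrete sketch of the manual approach, assuming a hypothetical employeeMap of Integer ages to String names, filtered on keys greater than 30 (the class, method, and sample entries here are our own illustrative choices):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class MapFilterExample {
    // Manually copy over only the entries whose key (age) is greater than 30
    public static Map<Integer, String> filterByKey(Map<Integer, String> employeeMap) {
        Map<Integer, String> filteredMap = new LinkedHashMap<>();
        for (Map.Entry<Integer, String> entry : employeeMap.entrySet()) {
            if (entry.getKey() > 30) {
                filteredMap.put(entry.getKey(), entry.getValue());
            }
        }
        return filteredMap;
    }

    public static void main(String[] args) {
        Map<Integer, String> employeeMap = new LinkedHashMap<>();
        employeeMap.put(35, "Mark");
        employeeMap.put(24, "Anna"); // hypothetical entry that gets filtered out
        employeeMap.put(40, "John");
        employeeMap.put(31, "Jim");
        System.out.println("Filtered Map: " + filterByKey(employeeMap));
        // Prints: Filtered Map: {35=Mark, 40=John, 31=Jim}
    }
}
```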
Here, we've gone through the entrySet() of the employeeMap, and added each matching employee into a LinkedHashMap via its put() method. This would work exactly the same with a HashMap, but a HashMap wouldn't preserve the order of insertion:
Filtered Map: {35=Mark, 40=John, 31=Jim}
Filtering out by values boils down to much the same approach, albeit, we'll be checking the value of each entry and using that in a condition:
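A sketch under the same assumptions; the specific condition used here (names starting with a given prefix) is our own illustrative choice:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class MapValueFilterExample {
    // Keep only the entries whose value (name) starts with the given prefix
    public static Map<Integer, String> filterByValue(Map<Integer, String> employeeMap, String prefix) {
        Map<Integer, String> filteredMap = new LinkedHashMap<>();
        for (Map.Entry<Integer, String> entry : employeeMap.entrySet()) {
            if (entry.getValue().startsWith(prefix)) {
                filteredMap.put(entry.getKey(), entry.getValue());
            }
        }
        return filteredMap;
    }

    public static void main(String[] args) {
        Map<Integer, String> employeeMap = new LinkedHashMap<>();
        employeeMap.put(35, "Mark");
        employeeMap.put(40, "John");
        employeeMap.put(31, "Jim");
        System.out.println("Filtered Map: " + filterByValue(employeeMap, "J"));
        // Prints: Filtered Map: {40=John, 31=Jim}
    }
}
```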
This is the manual way to filter a map - iterating and picking the desired elements. Let's now take a look at a more readable, and friendlier way - via the Stream API.
Stream.filter()
A more modern way to filter maps would be leveraging the Stream API from Java 8, which makes this process much more readable. The filter() method of the Stream class, as the name suggests, filters any Collection based on a given condition.
For example, given a Collection of names, you can filter them out based on conditions such as - containing certain characters or starting with a specific character.
Filter a Map by Keys with Stream.filter()
Let's leverage the Stream API to filter out this same map given the same condition. We'll stream() the entrySet() of the map, and collect() it back into a Map:
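A sketch of the Stream-based version, again assuming the hypothetical employeeMap; collecting into a LinkedHashMap via the four-argument Collectors.toMap() preserves encounter order:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.stream.Collectors;

public class StreamKeyFilterExample {
    // Stream the entry set, keep keys greater than 30, collect back into a Map
    public static Map<Integer, String> filterByKey(Map<Integer, String> employeeMap) {
        return employeeMap.entrySet().stream()
                .filter(entry -> entry.getKey() > 30)
                .collect(Collectors.toMap(
                        Map.Entry::getKey,
                        Map.Entry::getValue,
                        (existing, replacement) -> existing, // merge function; keys are unique anyway
                        LinkedHashMap::new));                // preserve encounter order
    }

    public static void main(String[] args) {
        Map<Integer, String> employeeMap = new LinkedHashMap<>();
        employeeMap.put(35, "Mark");
        employeeMap.put(24, "Anna"); // hypothetical entry that gets filtered out
        employeeMap.put(40, "John");
        employeeMap.put(31, "Jim");
        System.out.println("Filtered map: " + filterByKey(employeeMap));
        // Prints: Filtered map: {35=Mark, 40=John, 31=Jim}
    }
}
```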
What this code does is much the same as what we did manually - for each element in the map's entry set, we check whether the key is greater than 30 and collect the matching entries into a new Map, with their respective keys and values supplied through the getKey() and getValue() method references:
Filtered map: {35=Mark, 40=John, 31=Jim}
Filter a Map by Values with Stream.filter()
Now, let's populate another map, and instead of an <Integer, String> key-value pair, we'll use a <String, String> pair:
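A sketch of such a map; the specific city-country entries (including an extra Berlin entry, added so there's something to filter out) are our assumptions:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class CityMapExample {
    // Build a map of cities to the countries they're located in
    public static Map<String, String> buildCityMap() {
        Map<String, String> cityMap = new LinkedHashMap<>();
        cityMap.put("Belgrade", "Serbia");
        cityMap.put("Tokyo", "Japan");
        cityMap.put("Kyoto", "Japan");  // duplicate value is fine; keys are unique
        cityMap.put("Berlin", "Germany"); // hypothetical extra entry to filter out later
        return cityMap;
    }

    public static void main(String[] args) {
        System.out.println(buildCityMap());
    }
}
```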
This time around, we've got city-country pairs, where keys are individual cities, and the values are the countries they're located in. Values don't have to be unique. Kyoto and Tokyo, which are both unique keys, can have the same value - Japan.
Filtering this map by values, again, boils down to much the same approach as before - we'll simply use the value, via the getValue() method, in the filtering condition:
Now, this results in a filtered map that contains both Tokyo and Kyoto:
Filtered map: {Tokyo=Japan, Kyoto=Japan}
You can get creative with the outputs and results here. Instead of putting these elements into a new map and returning it, we can shape the result into other structures as well. For instance, we could filter the keys that have Japan or Serbia as values, and join those keys into a single String:
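A sketch of that joining step, with the same assumed city map and a made-up helper name, could look like this:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;
import java.util.stream.Collectors;

public class JoinFilteredKeys {
    // Joins the keys whose values appear in the wanted set into a single String
    static String joinKeysWithValues(Map<String, String> map, Set<String> wanted) {
        return map.entrySet().stream()
                .filter(entry -> wanted.contains(entry.getValue()))
                .map(Map.Entry::getKey)
                .collect(Collectors.joining(", "));
    }

    public static void main(String[] args) {
        Map<String, String> cityMap = new LinkedHashMap<>();
        cityMap.put("Belgrade", "Serbia");
        cityMap.put("New York", "USA");
        cityMap.put("Tokyo", "Japan");
        cityMap.put("Kyoto", "Japan");

        System.out.println("Filtered map: "
                + joinKeysWithValues(cityMap, Set.of("Japan", "Serbia")));
    }
}
```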
Here, we've used a different Collector than before. Collectors.joining() returns a new Collector that joins the elements into a String. Besides the CharSequence delimiter we've passed in, we could also have supplied a CharSequence prefix and a CharSequence suffix, which wrap the entire joined result.
This results in a String containing all of the filtered keys, separated by a comma:
Filtered map: Belgrade, Tokyo, Kyoto
Conclusion
In this article, we've taken a look at how to filter a Map in Java. We first went over how to use enhanced for-loops for pre-Java 8 projects, after which we dived into the Stream API and leveraged the filter() method.
Filtering maps by either keys or values becomes a simple one-liner with the help of the Stream API, and you have a wide variety of Collectors to format the output to your liking.
Sony is reportedly developing a remake of The Last of Us for PS5, with Naughty Dog overseeing the project.
The report, from Bloomberg, claims that the idea for the remake was conceived by the smaller Visual Arts Service Group, but that Sony moved the project back to franchise creator Naughty Dog; the smaller group has since been disbanded, the report says. The remake was reportedly in the works during the development of The Last of Us Part 2, meaning Sony could potentially sell both games in a combo edition for PS5.
Sony is clearly not done with The Last of Us franchise, with the upcoming multiplayer ‘Factions’ game as well as an HBO show in development. A new remake of the classic game could be a great way to generate more interest in the franchise from new players, especially as Naughty Dog isn’t working on a third entry just yet.
The Last of Us Part 2
The Last of Us and The Last of Us Part 2 are now available to play on PS4 and PS5 via backwards compatibility.
Facebook is withholding certain job ads from women because of their gender, according to the latest audit of its ad service.
The audit, conducted by independent researchers at the University of Southern California (USC), reveals that Facebook’s ad-delivery system shows different job ads to women and men even though the jobs require the same qualifications. This is considered sex-based discrimination under US equal employment opportunity law, which bans ad targeting based on protected characteristics. The findings come despite years of advocacy and lawsuits, and after promises from Facebook to overhaul how it delivers ads.
The researchers registered as an advertiser on Facebook and bought pairs of ads for jobs with identical qualifications but different real-world demographics. They advertised for two delivery driver jobs, for example: one for Domino’s (pizza delivery) and one for Instacart (grocery delivery). There are currently more men than women who drive for Domino’s, and vice versa for Instacart.
Though no audience was specified on the basis of demographic information, a feature Facebook disabled for housing, credit, and job ads in March of 2019 after settling several lawsuits, algorithms still showed the ads to statistically distinct demographic groups. The Domino’s ad was shown to more men than women, and the Instacart ad was shown to more women than men.
The researchers found the same pattern with ads for two other pairs of jobs: software engineers for Nvidia (skewed male) and Netflix (skewed female), and sales associates for cars (skewed male) and jewelry (skewed female).
The findings suggest that Facebook’s algorithms are somehow picking up on the current demographic distribution of these jobs, which often differ for historical reasons. (The researchers weren’t able to discern why that is, because Facebook won’t say how its ad-delivery system works.) “Facebook reproduces those skews when it delivers ads even though there’s no qualification justification,” says Aleksandra Korolova, an assistant professor at USC, who coauthored the study with her colleague John Heidemann and their PhD advisee Basileal Imana.
The study supplies the latest evidence that Facebook has not resolved its ad discrimination problems since ProPublica first brought the issue to light in October 2016. At the time, ProPublica revealed that the platform allowed advertisers of job and housing opportunities to exclude certain audiences characterized by traits like gender and race. Such groups receive special protection under US law, making this practice illegal. It took two and half years and several legal skirmishes for Facebook to finally remove that feature.
But a few months later, the US Department of Housing and Urban Development (HUD) levied a new lawsuit, alleging that Facebook’s ad-delivery algorithms were still excluding audiences for housing ads without the advertiser specifying the exclusion. A team of independent researchers including Korolova, led by Northeastern University’s Muhammad Ali and Piotr Sapieżyński, corroborated those allegations a week later. They found, for example, that houses for sale were being shown more often to white users and houses for rent were being shown more often to minority users.
Korolova wanted to revisit the issue with her latest audit because the burden of proof for job discrimination is higher than for housing discrimination. While any skew in the display of ads based on protected characteristics is illegal in the case of housing, US employment law deems it justifiable if the skew is due to legitimate qualification differences. The new methodology controls for this factor.
“The design of the experiment is very clean,” says Sapieżyński, who was not involved in the latest study. While some could argue that car and jewelry sales associates do indeed have different qualifications, he says, the differences between delivering pizza and delivering groceries are negligible. “These gender differences cannot be explained away by gender differences in qualifications or a lack of qualifications,” he adds. “Facebook can no longer say [this is] defensible by law.”
The release of this audit comes amid heightened scrutiny of Facebook’s AI bias work. In March, MIT Technology Review published the results of a nine-month investigation into the company’s Responsible AI team, which found that the team, first formed in 2018, had neglected to work on issues like algorithmic amplification of misinformation and polarization because of its blinkered focus on AI bias. The company published a blog post shortly after, emphasizing the importance of that work and saying in particular that Facebook seeks “to better understand potential errors that may affect our ads system, as part of our ongoing and broader work to study algorithmic fairness in ads.”
“We’ve taken meaningful steps to address issues of discrimination in ads and have teams working on ads fairness today,” said Facebook spokesperson Joe Osborn in a statement. “Our system takes into account many signals to try and serve people ads they will be most interested in, but we understand the concerns raised in the report… We’re continuing to work closely with the civil rights community, regulators, and academics on these important matters.”
Despite these claims, however, Korolova says she found no noticeable change between the 2019 audit and this one in the way Facebook’s ad-delivery algorithms work. “From that perspective, it’s actually really disappointing, because we brought this to their attention two years ago,” she says. She’s also offered to work with Facebook on addressing these issues, she says. “We haven’t heard back. At least to me, they haven’t reached out.”
In previous interviews, the company said it was unable to discuss the details of how it was working to mitigate algorithmic discrimination in its ad service because of ongoing litigation. The ads team said its progress has been limited by technical challenges.
Sapieżyński, who has now conducted three audits of the platform, says this has nothing to do with the issue. “Facebook still has yet to acknowledge that there is a problem,” he says. While the team works out the technical kinks, he adds, there’s also an easy interim solution: it could turn off algorithmic ad targeting specifically for housing, employment, and lending ads without affecting the rest of its service. It’s really just an issue of political will, he says.
Christo Wilson, another researcher at Northeastern who studies algorithmic bias but didn’t participate in Korolova’s or Sapieżyński’s research, agrees: “How many times do researchers and journalists need to find these problems before we just accept that the whole ad-targeting system is bankrupt?”
It’s been a busy week for Clearview AI, the controversial facial recognition company that uses 3 billion photos scraped from the web to power a search engine for faces. On April 6, BuzzFeed News published a database of over 1,800 entities—including state and local police and other taxpayer-funded agencies such as health-care systems and public schools—that it says have used the company’s controversial products. Many of those agencies replied to the accusations by saying they had only trialed the technology and had no formal contract with the company.
But the day before, the definition of a “trial” with Clearview was detailed when nonprofit news site Muckrock released emails between the New York Police Department and the company. The documents, obtained through freedom of information requests by the Legal Aid Society and journalist Rachel Richards, track a friendly two-year relationship between the department and the tech company during which time NYPD tested the technology many times, and used facial recognition in live investigations.
The NYPD has previously downplayed its relationship with Clearview AI and its use of the company’s technology. But the emails show that the relationship between them was well developed, with a large number of police officers conducting a high volume of searches with the app and using them in real investigations. The NYPD has run over 5,100 searches with Clearview AI.
This is particularly problematic because stated policies limit the NYPD from creating an unsupervised repository of photos that facial recognition systems can reference, and restrict the use of facial recognition technology to a specific team. Both policies seem to have been circumvented with Clearview AI. The emails reveal that the NYPD gave many officers outside the facial recognition team access to the system, which relies on a huge library of public photos from social media. The emails also show how NYPD officers downloaded the app onto their personal devices, in contravention of stated policy, and used the powerful and biased technology in a casual fashion.
Clearview AI runs a powerful neural network that processes photographs of faces and compares their precise measurement and symmetry with a massive database of pictures to suggest possible matches. It’s unclear just how accurate the technology is, but it’s widely used by police departments and other government agencies. Clearview AI has been heavily criticized for its use of personally identifiable information, its decision to violate people’s privacy by scraping photographs from the internet without their permission, and its choice of clientele.
The emails span a period from October 2018 through February 2020, beginning when Clearview AI CEO Hoan Ton-That was introduced to NYPD deputy inspector Chris Flanagan. After initial meetings, Clearview AI entered into a vendor contract with NYPD in December 2018 on a trial basis that lasted until the following March.
The documents show that many individuals at NYPD had access to Clearview during and after this time, from department leadership to junior officers. Throughout the exchanges, Clearview AI encouraged more use of its services. (“See if you can reach 100 searches,” its onboarding instructions urged officers.) The emails show that trial accounts for the NYPD were created as late as February 2020, almost a year after the trial period was said to have ended.
We reviewed the emails, and talked to top surveillance and legal experts about their contents. Here’s what you need to know.
NYPD lied about the extent of its relationship with Clearview AI and the use of its facial recognition technology
The NYPD told BuzzFeed News and the New York Post previously that it had “no institutional relationship” with Clearview AI, “formally or informally.” The department did disclose that it had trialed Clearview AI, but the emails show that the technology was used over a sustained time period by a large number of people who completed a high volume of searches in real investigations.
In one exchange, a detective working in the department’s facial recognition unit said, “App is working great.” In another, an officer on the NYPD’s identity theft squad said that “we continue to receive positive results” and have “gone on to make arrests.” (We have removed full names and email addresses from these images; other personal details were redacted in the original documents.)
Albert Fox Cahn, executive director at the Surveillance Technology Oversight Project, a nonprofit that advocates for the abolition of police use of facial recognition technology in New York City, says the records clearly contradict NYPD’s previous public statements on its use of Clearview AI.
“Here we have a pattern of officers getting Clearview accounts—not for weeks or months, but over the course of years,” he says. “We have evidence of meetings with officials at the highest level of the NYPD, including the facial identification section. This isn’t a few officers who decide to go off and get a trial account. This was a systematic adoption of Clearview’s facial recognition technology to target New Yorkers.”
Further, NYPD’s description of its facial recognition use, which is required under a recently passed law, says that “investigators compare probe images obtained during investigations with a controlled and limited group of photographs already within possession of the NYPD.” Clearview AI is known for its database of over 3 billion photos scraped from the web.
NYPD is working closely with immigration enforcement, and officers referred Clearview AI to ICE
The documents contain multiple emails from the NYPD that appear to be referrals to aid Clearview in selling its technology to the Department of Homeland Security. Two police officers had both NYPD and Homeland Security affiliations in their email signature, while another officer identified as a member of a Homeland Security task force.
“There just seems to be so much communication, maybe data sharing, and so much unregulated use of technology.”
New York is designated as a sanctuary city, meaning that local law enforcement limits its cooperation with federal immigration agencies. In fact, NYPD’s facial recognition policy statement says that “information is not shared in furtherance of immigration enforcement” and “access will not be given to other agencies for purposes of furthering immigration enforcement.”
“I think one of the big takeaways is just how lawless and unregulated the interactions and surveillance and data sharing landscape is between local police, federal law enforcement, immigration enforcement,” says Matthew Guariglia, an analyst at the Electronic Frontier Foundation. “There just seems to be so much communication, maybe data sharing, and so much unregulated use of technology.”
Cahn says the emails immediately ring alarm bells, particularly since a great deal of law enforcement information funnels through central systems known as fusion centers.
“You can claim you’re a sanctuary city all you want, but as long as you continue to have these DHS task forces, as long as you continue to have information fusion centers that allow real-time data exchange with DHS, you’re making that promise into a lie.”
Many officers asked to use Clearview AI on their personal devices or through their personal email accounts
At least four officers asked for access to Clearview’s app on their personal devices or through personal emails. Department devices are closely regulated, and it can be difficult to download applications to official NYPD mobile phones. Some officers clearly opted to use their personal devices when department phones were too restrictive.
Clearview replied to this email, “Hi William, you should have a setup email in your inbox shortly.”
Jonathan McCoy is a digital forensics attorney at Legal Aid Society and took part in filing the freedom of information request. He found the use of personal devices particularly troublesome: “My takeaway is that they were actively trying to circumvent NYPD policies and procedures that state that if you’re going to be using facial recognition technology, you have to go through FIS (facial identification section) and they have to use the technology that’s already been approved by the NYPD wholesale.” NYPD does already have a facial recognition system, provided by a company called Dataworks.
Guariglia says it points to an attitude of carelessness by both the NYPD and Clearview AI. “I would be horrified to learn that police officers were using Clearview on their personal devices to identify people that then contributed to arrests or official NYPD investigations,” he says.
The concerns these emails raise are not just theoretical: they could allow the police to be challenged in court, and even have cases overturned because of failure to adhere to procedure. McCoy says the Legal Aid Society plans to use the evidence from the emails to defend clients who have been arrested as the result of an investigation that used facial recognition.
“We would hopefully have a basis to go into court and say that whatever conviction was obtained through the use of the software was done in a way that was not commensurate with NYPD policies and procedures,” he says. “Since Clearview is an untested and unreliable technology, we could argue that the use of such a technology prejudiced our client’s rights.”
As covid vaccines roll out in a handful of countries, the next question has become: How do people prove they’ve been inoculated? For months, this conversation—and the ethical questions any “vaccine passport” system would raise—has been theoretical, but over the last few weeks, efforts have become more concrete. Australian airline Qantas started running a trial in March, while New York launched the first state-level system in the US last week. And on April 5, the UK said it would conduct a pilot as part of its gradual easing of lockdown restrictions. The moves have prompted various reactions: some states in the US have endorsed the concept; others have banned it.
What is a vaccine passport?
When experts talk about turning proof of vaccination into a credential or passport, there are usually two very different reasons they’re put forward.
Proof at international borders. You’d pull this out for immigration authorities when entering another country, mirroring how international vaccine records [pdf] have typically worked for decades—many nations already recommend vaccinations for entry, or require proof of immunizations for diseases such as yellow fever.
Proof for around town. This kind of credential would get more day-to-day use, and it is the one most people are discussing when they talk about vaccine passports. Experts envision that you might show this to enter the building you work in, go to a cafe, or attend a private event such as a concert or wedding.
In either case, the pass might come in one of two forms. It might be stored on your smartphone, or you might carry a piece of paper that could be scanned or displayed. Systems would typically work with either proof of vaccination or a recent negative test. The UK’s early-stage pilot will reportedly also allow proof of recent infection, which would lend a person immunity.
Who’s developing products?
In most places, despite all the recent conversation, vaccine passports haven’t materialized, but many countries and private companies continue to forge ahead. Airlines are talking about an industry-wide solution, for example. As far as countries go, Israel’s version of a vaccine credential is one of the furthest along. Its “green pass” launched in February.
With so many players, software companies have been jockeying for months to become the go-to solution for vaccine credentials. Some are beginning to join up with each other to agree on some common standards. For instance, New York’s system, the Excelsior Pass, uses IBM’s Digital Health Pass. IBM is also a member of Linux Foundation Public Health, an organization that helps hundreds of developers share code and ideas.
But even with increased cooperation, there’s still a lot to sort out. A few big questions about vaccine passports are still on the table.
How will developers keep private health information secure?
New York’s app promises privacy but doesn’t explain how that’s accomplished, says security researcher Albert Fox Cahn, who directs the Surveillance Technology Oversight Project based in New York. He says, “We don’t even have the most rudimentary information about what data it captures, how that data is stored, or what security measures are being used.” Cahn says that he tried an “ethical hacking” exercise: he got permission to try activating a user’s pass simply by inputting details (like birth date) found on social media accounts. He says, “It took me 11 minutes before I had their blue Excelsior Pass.”
For Israel’s green pass, some security experts have already outlined concerns about the outdated encryption being used.
Paper, smartphone, or both?
Requiring people to use a smartphone would exclude significant portions of the population, including many older people and some who cannot afford or choose not to use high-end phones. New York’s pass system—currently in a pilot phase for selected big venues—says that a paper card would be acceptable proof, and that other states’ records or negative test results should also work. That sort of flexibility is part of other proposed systems, too. The PathCheck initiative, run by MIT associate professor Ramesh Raskar, is working on a system that uses paper cards with QR code stickers attached. Codes can be scanned by venues or anyone who wants to vet people entering a space. Other solutions, he says, are too heavy-handed. “People are trying to build business models on top of it,” he says. Instead, he says, “we need a mass-use solution right away, in the middle of a pandemic.”
How does immunization data get stored and shared?
In some countries with nationalized health systems, like the UK and Israel, immunization records can be made centrally accessible. In the US, however, a universal solution faces another major hurdle: the country’s fractured health-care system. Vaccine records are stored in a patchwork of databases that don’t normally work together.
“It’s a jumble,” says Jenny Wanger, who oversees covid-related initiatives for Linux Foundation Public Health. “This is all just a sign of how massively underfunded our public health infrastructure has been for so many years.”
The US’s disconnected system stands in stark contrast to countries like India, where data is much more centralized, says Anit Mukherjee, of the US think tank Center for Global Development. There, he says, “there is no way that we can manage a rollout of a vaccine for one billion people without having some form of centralized system.”
What about the ethics of requiring vaccine proof?
While the benefits to those who are able to use vaccine passports are clear—they will be able to return to something resembling normal life—there are legitimate concerns about the ways in which digitized data will be used, today and in the future. Points to keep an eye on:
Access could be unfairly limited for some people. The vast majority of shots received so far—84%, according to the New York Times—have been given in wealthier countries. And even in those countries, certain groups of workers haven’t been prioritized—US nail salon technicians, for example, have been low priority despite facing high rates of infection. In Israel, distribution to Palestinians in the occupied territories remains slow. For those without a vaccination record, vaccine passports will require proof of a recent negative test, which could cost time or money to obtain.
Laws and policies will need to spell out protections. Imogen Parker leads covid technology work at the Ada Lovelace Institute in London, which has been studying vaccine passports and surrounding ethical issues since May 2020. She says that when it comes to day-to-day use, “there has to be real clarity about how this interacts with equalities legislation, employment law … Could this be used at protests? Could this be used at voting booths?” In the US, she says, that information could also pipe to insurance companies, unless such uses are specifically prohibited.
Countries could use credentials as a way to keep people out. For border crossing, Parker says, the complication is that not all countries have vaccines yet: “Is this going to encourage [countries] to spread vaccines? Is travel and trade predicated on vaccine status?” Mukherjee, meanwhile, points out that not all vaccines are equal. For example, some studies suggest China’s CoronaVac has an efficacy of around 50%, lower than the rates of 90% and higher shown by the Pfizer-BioNTech and Moderna vaccines. Does this mean even those with the “wrong” vaccinations could end up being rejected?
What does the road ahead look like?
With so many questions still to be answered, the stakes for getting it right remain high. In a slide deck obtained by the Washington Post, federal officials worried that a botched rollout “could hamper our pandemic response by undercutting health safety measures, slowing economic recovery, and undermining public trust and confidence.” Since then, the Biden administration has said that it will not issue a nationwide mandate.
But despite the recent media coverage, political takes, and new app launches, it’s not clear what the long-term outlook for vaccine credentials might be. In the short run, they might become a sort of nudge for the hesitant, encouraging them to get their shots in order to open doors that would otherwise remain (literally) closed.
“Our intention is to open as many places as possible with the green pass,” said Israel’s health ministry’s director for health, Sharon Alroy-Preis, in an interview with the Israeli news website Ynet. “The goal is to create places that are safer, and to encourage vaccination.”
But after that? Experts don’t know yet—and even Israel is still figuring it out. The clearest answer is that, for at least a brief window of time, in certain places, people may need to prove that they’re inoculated or free of covid. Whether or not these systems stick around, and how people will feel about that, is as hard to predict as the course of the pandemic.
Even if the future is murky, though, Parker says that having a sense of the long view is important: “You’re building a tool for health surveillance and normalizing a number of third parties requesting or requiring individuals to share data. There’s a really big question of how that could evolve.” On the other hand, she says, if this is temporary, “do we have the ability to dismantle it?”
Bioethicist Arthur Caplan, founding head of the Division of Medical Ethics at NYU School of Medicine, says that he’s seen how norms around vaccinations can change and evolve. He recalls his push to require health-care professionals to get flu shots and says that after initial debate, the controversy died down: “Some people said, I’m not doing it, I hate it. After about two years of that? Nobody cares. They just do it.”
And in any case, ending the pandemic relies on multiple factors, not just one kind of technology, says Julie Samuels, who helped launch New York’s exposure notification app last year. As with all tech related to the pandemic, she says, “it’s important to think of these things as just a layer of protection … Obviously the most important thing is to get as many people vaccinated as possible.”
Data engineers at Purdue University are using a wealth of connected vehicle data to help improve highway safety and efficiency while laying the groundwork for the ultimate edge device, the autonomous vehicle.
In an effort to put “data in the driver’s seat,” researchers at Purdue’s College of Engineering have created an Autonomous and Connected Systems Initiative that seeks to advance the Internet of Things, robotics and autonomy applications. The effort is being supplemented by a new graduate-level course on the application of machine learning to autonomous vehicles.
The foundation of the data initiative is the estimated 12 billion connected-vehicle data points collected across the state of Indiana in a month. “It is big data,” noted Darcy Bullock, a civil engineering professor and director of Purdue’s Joint Transportation Research Program.
Among the goals of the Purdue effort is using its data engineering initiative to forge collaboration between state agencies building new highway infrastructure and auto makers producing connected vehicles and, eventually, autonomous vehicles. The huge data sets provide the ingredients for transportation planning as the connected vehicle evolves into an autonomous people mover.
The Hoosier State spends about $2 billion annually on highway infrastructure, trailing only Michigan in terms of automotive GDP. Until now, state highway departments and car makers “have never really talked,” Bullock said in an interview. “Both really need each other now.”
The common denominator is the data recorded by connected vehicles, which can be used for everything from real-time traffic updates to gauging the condition of pavement and lane markers.
“From a civil engineering perspective, we need to know from the auto manufacturers what we need to do to build the next generation of roads,” Bullock said.
Hence, Purdue’s connected car initiative is attempting to organize data collected by car makers to help transportation planners keep traffic moving while preparing for the day when autonomous vehicles become a reality.
The initial focus on connected vehicles is driven by the growing amounts of data collected by “black box” systems that record data on everything from speed to hard braking. (Such a system was used to determine that excessive speed on a winding road contributed to golfer Tiger Woods’ February crash near Los Angeles.)
Data engineers have come to view connected vehicles as the ultimate edge device, generating loads of data about traffic patterns and hazards that could be used to inform transportation planning.
Purdue researchers focused on connected vehicles and gradual autonomy are currently using large sets of anonymized data for testing in controlled environments, including the university’s research into unmanned vehicles and autonomous agriculture. In one use case, machine learning algorithms were used to program drones that mapped car crash scenes.
The edge computing challenge focuses on prioritizing connected data, Bullock said. Data analysts must decide “what’s important in real time, and what’s information we can process at the edge in the car and maybe transmit it at 2 a.m.,” he explained.
Connected car data like pavement conditions can perhaps be sent once a day. “What we really need to know is what’s going on out there on the interstate at any given time because those are conditions where we can make tactical decisions,” he added.
(Bullock used a traffic dashboard visualization to show us precisely where and when he was stuck in a U.S. Interstate 65 traffic jam on his way to the Purdue campus to be interviewed for this story.)
In this interstate traffic visualization, green is good, red is brake lights. (Source: Purdue University)
Purdue’s efforts are being supplemented by curriculum changes designed to train the next generation of data engineers. The university’s data science initiative, for instance, focuses on data applications and “fluency.”
Along with auto makers, Purdue is also working with Google, using its BigQuery platform to accelerate analysis of connected and autonomous vehicle data sets.
Bullock sees more opportunities to scale Purdue’s crowd-sourcing model. “When one considers that most modern cars have a large collection of sensors that can provide this feedback, we must find ways to effectively and quickly share data between manufacturers and agencies in a manner that does not compromise privacy,” he told U.S. lawmakers considering transportation infrastructure legislation.
Companies are adopting AI solutions at unprecedented rates, but ethical worries continue to dog the rollouts. While there are no established standards for AI ethics, a common set of guidelines is beginning to emerge to help bridge the gap between ethical principles and AI implementations. Unfortunately, a general hesitancy to even discuss the problem could slow efforts to find a solution.
As the AI Ethics Chief for Boston Consulting Group, Steve Mills talks with a lot of companies about their ethical concerns and their ethics programs. While they’re not slowing down their AI rollouts because of ethics concerns at this time, Mills says, they are grappling with the issue and are searching for the best way to develop AI systems without violating ethical principles.
“What we continue seeing here is this gap, what we started calling this the responsible AI gap, that gap from principle to action,” Mills says. “They want to do the right thing, but no one really knows how. There is no clear roadmap or framework of this is how you build an AI ethics program, or a responsible AI program. Folks just don’t know.”
As a management consulting firm, Boston Consulting Group is well positioned to help companies with this problem. Mills and his BCG colleagues have helped companies develop AI programs. Out of that experience, they recently came up with a general AI ethics program that others can use as a framework to get started.
It has six parts, including:
Empower Responsible AI Leadership – Appoint a leader who will take responsibility and give her a team;
Develop principles, policies, and training – These are the core principles that will guide AI development;
Establish human and AI governance – The system for reviewing adherence to principles and for participants to voice concerns;
Conduct Responsible AI reviews – Build or buy a tool to conduct reviews of AI systems at scale;
Integrate tools and methods – Directly imbuing ethical AI considerations into the AI tools and tech;
Build and test a response plan – The system for responding to lapses in principles, and for testing those responses. You can read more about BCG’s six-part plan here.
The most important thing a company can do to get started is to appoint somebody to be responsible for the AI ethics program, Mills says. That person can come from inside the company or outside of it, he says. Regardless, he or she will need to be able to drive the vision and strategy of ethics, but also understand the technology. Finding such a person will not be easy (indeed, just finding AI ethicists let alone executives who can take this role is no easy task).
“Ultimately, you’re going to need a team. You’re not going to be successful with just one person,” Mills says. “You need a wide diversity of skill sets. You need bundled into that group the strategists, the technologists, the ethicists, marketing–all of it bundled together. Ultimately, this is really about driving a culture change.”
There are a handful of companies that have taken a leadership role in paving the way forward in AI ethics. According to Mills, the software companies Microsoft, Salesforce, and Autodesk, as well as Spanish telecom Telefónica, have developed solid programs to define what AI ethics means to them and developed systems to enforce it within their companies.
“And BCG of course,” he says, “but I’m biased.”
Rooting Out Bias at Salesforce
As the Principal Architect of the Ethical AI Practice at Salesforce, Kathy Baxter is one of the foremost authorities on AI ethics. Her decisions impact how Salesforce customers approach the AI ethical quandary, which in turn can impact millions of end users around the world.
So you might expect Baxter to say that Salesforce’s algorithms are bias-free, that they always make fair decisions, and never take into account factors based on controversial data.
You would be mistaken.
“You can never say that a model is 100% bias free. It’s just statistically not possible,” Baxter says. “If it does say that there is zero bias, you’re probably overfitting your model. Instead, what we can say is that this is the type of bias that I looked for.”
To prevent bias, model developers must be conscious of the specific types of bias they’re trying to prevent, Baxter says. That means, if you’re looking to avoid identity bias in a sentiment analysis model, for example, then you should be on the lookout for how different terms, such as Muslim, feminist, or Christian, affect the results.
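A counterfactual check along the lines Baxter describes could be sketched as follows. The templates, identity terms, and the placeholder scoring function are illustrative assumptions, not Salesforce code; a real audit would call the model under test:

```python
# Swap identity terms into otherwise identical sentences and compare
# the model's mean sentiment score per term.
TEMPLATES = ["I am a {term}.", "My neighbor is a {term}."]
IDENTITY_TERMS = ["Muslim", "Christian", "feminist"]

def score_sentiment(text: str) -> float:
    # Placeholder: a real implementation would invoke the model under test.
    return 0.5

def identity_bias_gap(score_fn) -> float:
    """Largest difference in mean sentiment across identity terms."""
    means = []
    for term in IDENTITY_TERMS:
        scores = [score_fn(t.format(term=term)) for t in TEMPLATES]
        means.append(sum(scores) / len(scores))
    return max(means) - min(means)

# With the neutral placeholder model, the gap is zero; a nonzero gap
# from a real model would quantify identity bias for that term set.
gap = identity_bias_gap(score_sentiment)
```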
(Vitalii Vodolazskyi/Shutterstock)
Other biases to be on the lookout for are gender bias, racial bias, and accent or dialect bias, Baxter says. Emerging best-practices for AI ethics demands that practitioners devise ways to detect specific types of bias that could impact their particular AI system, and to take steps to counter those biases.
“What type of bias did you look for? How did you measure it?” Baxter tells Datanami. “And then what was the score? What is the actual safe or acceptable threshold of bias for you to say this is good enough to be released in the world?”
Baxter’s is a more nuanced, and practical, view of AI ethics than one might get from textbooks (if there are any on the topic yet). She seems to recognize that you should accept from the outset that bias is everywhere in human society, and that it can never be fully eradicated. But we can hopefully eliminate the worst type of biases and still enable companies and their customers to reap the rewards that AI promises in the first place.
“You often hear people say, Oh we should follow the Hippocratic Oath that says do no harm,” Baxter says. “Well, that’s not actually the true application in medical or pharmaceutical industry, because if you said ‘no harm,’ there would be no medical treatment. You could never do surgery because you’re doing harm to the body when you’re cutting the body open. But the benefits outweigh the risks of doing nothing.”
There are ethical pitfalls everywhere. For example, it’s not just bad form to make business decisions based on the race or ethnicity of somebody–it’s also illegal. But the paradox is, unless you collect data about race or ethnicity, you don’t know if those factors are sneaking into the model somehow, perhaps through a proxy like ZIP Codes.
“You want to be able to run a study and see: are the outcomes different based on what someone’s race is, or based on what someone’s gender is?” Baxter says. “If it is, that’s a real problem. If you just say ‘No, I don’t even want to look at race, I’m just going to completely exclude that,’ then it’s very difficult to create fairness through unawareness.”
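One hedged sketch of the audit Baxter describes: compute outcome rates per group for a protected attribute the model never saw as a feature, and flag large gaps for review. The field names and the 10-point threshold here are assumptions for illustration:

```python
from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of (group, approved) pairs."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

# Toy data: group labels joined back onto model decisions after the fact.
decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]
rates = approval_rates(decisions)
gap = max(rates.values()) - min(rates.values())
# Flag for human review if outcomes diverge by, say, more than 10 points.
needs_review = gap > 0.10
```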
‘Sea of Vagueness’
The challenge is that this is all fairly new, and nobody has a solid roadmap to follow. Salesforce is working to build processes in Einstein Discovery to help its customers model data without incorporating negative bias, but even Salesforce is flying blind to a certain extent.
Kathy Baxter, Principal Architect of the Ethical AI Practice at Salesforce
The lack of established standards and regulations is the biggest challenge in AI ethics, Baxter says. “Everyone is working in kind of a sea of vagueness,” she says.
She sees similarities to how the cybersecurity field developed in the 1980s. There was no security at first, and we all got hit by malware and viruses. That ultimately prompted the creation of a new discipline with new standards to guide its development. That process took years, and it will take years to hash out standards for AI ethics, she says.
“It’s a game of whack a mole in security. I think it’s going to be similar to AI,” she says. “We’re in this period right now where we’re developing standards, we’re developing regulations and it will never be a solved problem. AI will continue evolving, and when it does, new risks will emerge and so we will always be in a practice. It will never be a solved problem, but [we’ll continue] learning and iterating. So I do think we can get there. We’re just in an uncomfortable place right now because we don’t have it.”
AI ethics is a new discipline, so don’t expect perfection overnight. A little bit of failure isn’t the end of the world, but being open enough to discuss failures is a virtue. That can be tough to do in today’s volatile public environment, but it’s a critical ingredient to make progress, BCG’s Mills says.
“What I try to tell people is no one has all the answers. It’s a new area. Everyone is collectively learning,” he says. “The best thing you can do is be open and transparent about it. I think customers appreciate that, particularly if you take the stand of, ‘We don’t have all the answers. Here are the things we’re doing. We might get it wrong sometimes, but we’ll be honest with you about what we’re doing.’ But I think we’re just not there yet. People are hesitant to have that dialog.”
The Hans Rosling Center for Population Health at the University of Washington in Seattle. (Kevin Scott Photos)
A lot can change in the landscape during a year in which social isolation is the norm. At the University of Washington, the Hans Rosling Center for Population Health has been completed and photographs show a striking new addition to the Seattle campus.
The 300,000-square-foot building is the physical home to the UW’s Population Health Initiative, launched in 2016 as a collaborative effort to address the intersection of human health, environmental resilience and social and economic equity.
The building was funded primarily by a $210 million gift from the Bill & Melinda Gates Foundation and $15 million in earmarked funding from the Washington State Legislature. It’s named for Swedish physician Hans Rosling, who inspired the Gateses with his “rigorous analysis of the true state of the world and his passion for improving health,” according to the UW.
The Rosling Center is home to the Institute for Health Metrics & Evaluation (IHME), an independent population health research center at UW Medicine, whose projections have informed policymakers during the COVID-19 pandemic.
Designed by The Miller Hull Partnership and built by Lease Crutcher Lewis, construction on the eight-story Rosling Center at the corner of 15th Avenue Northeast and Northeast Grant Lane began in April 2018. It features a variety of office types organized as a collection of neighborhoods containing flexible spaces, which can be transformed to meet changing needs.
The boldest side of the building faces west, with physically static, 3-foot-deep glass fins that Miller Hull says provide “a sense of energetic movement for pedestrians and act as a shaded canvas for changing light conditions throughout the day.”
The university previously said it planned to invest approximately $1.1 million in artwork for the building, with about $85,000 coming from public funds and the remainder from private donors.
The building is currently accessible for faculty, staff, and students, but like the rest of the UW campus is still operating under Phase 2 guidance, which strongly encourages telework and remote learning. The hope is to return to largely in-person instruction in the fall or as the state’s Healthy Washington plan allows.
ReadySub, a Seattle-based maker of a cloud-based scheduling platform for school districts, has been acquired by Tyler Technologies, a company that provides software and technology services to the public sector. Terms of the deal were not disclosed.
ReadySub works with approximately 1,000 school districts across the U.S. while Tyler Technologies has 2,000 school districts as clients.
David Vail and Vince Zanella founded ReadySub, whose 10 full-time employees will join Tyler Technologies’ Schools organization and work remotely.
When did you start using RAD Studio/Delphi, and how long have you been using it?
I started using Delphi with the first version, which was presented in 1995 in Orlando, Florida, at the Borland conference held that year. Over time I used versions 2, 3, 4, 5, 6 and 7, the latter being the best in my opinion. There were newer versions but I did not test them. I got funding for an academic project and bought the Seattle version. Without a doubt, the development and evolution of Delphi represent an extraordinary body of work. Being able to program with practically the same code for Linux, macOS, PC and Android makes it, in my opinion, one of the best RAD development tools. I had the opportunity to attend more than one Borland convention. There I met David I, one of Delphi’s most enthusiastic evangelists. Later, I even had the opportunity to interview Anders Hejlsberg, the creator of Turbo Pascal and the Delphi compiler.
What was it like building software before you had RAD Studio/Delphi?
The idea of visual and non-visual components makes programming much more effective. Being able to concentrate on the problem at hand, while using components that handle the routine tasks, is without a doubt one of the most attractive things about Delphi. In addition, for years Delphi has maintained an open source philosophy, and there is a wealth of source code, components and tools that can be used very easily. For my PhD thesis I developed a program that uses a series of open source components that solved a significant number of problems for the results I needed to obtain.
How did RAD Studio/Delphi help you create your showcase application?
Portraits using Craps is a program that creates images with dice. In May 2020, I wrote about a dice image created by cyber artist Barbara Lynn Helman. Apparently the creator placed the dice according to the shade of gray she perceived in each part of the image; the photographs she submitted seem to indicate this. However, assembling a dice mosaic like that purely by eye would have been too complicated a task, and probably too error-prone. I want to assume that Barbara used some program that told her which die to put in which position. This would be, in any case, the smart way to do the task. So I wrote a program that generates images with dice, like the ones Ms. Lynn Helman makes. In fact, the program is a modified version of other software that I wrote (for a Digital Image Processing university course), which creates images with halftones, simulating shades of gray for printing black-and-white photographs (see Computer Graphics: Principles and Practice in C, James D. Foley, Andries van Dam, Steven K. Feiner, John F. Hughes, Addison-Wesley, 1995; chapter 13.1.2, Halftone Approximation). I quickly got a program that generated the final images, placing virtual dice (dice images) instead of real dice on a flat surface.
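The core of the halftone-to-dice idea could be sketched like this. This is a hypothetical Python rendering of the technique; the actual program is written in Delphi, and the equal-width bin boundaries are an illustrative assumption:

```python
def gray_to_die_face(gray: int) -> int:
    """Map a 0-255 gray level to a die face 1-6.
    On a white die, darker cells get more pips (more ink)."""
    gray = max(0, min(255, gray))
    # Six equal bins: 0-42 -> six pips, ..., 213-255 -> one pip.
    return 6 - min(gray * 6 // 256, 5)

def image_to_dice(cells):
    """cells: 2D list of average gray values, one per die position.
    Returns the die face to place at each position."""
    return [[gray_to_die_face(g) for g in row] for row in cells]

assert gray_to_die_face(0) == 6    # darkest cell: six pips
assert gray_to_die_face(255) == 1  # lightest cell: one pip
```

A real implementation would first average the source image's pixels over a grid of die-sized cells, then paste the matching die image at each grid position.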
What made RAD Studio/Delphi stand out from other options?
I think that Delphi was my first choice because I had started with Turbo Pascal at a very young age, and I simply continued with the improvements that Delphi offered, which at first could be considered “Turbo Pascal for Windows.” I never really thought of other options. Delphi already offered advantages with its graphical interface and visual components.
What made you happiest about working with RAD Studio/Delphi?
Its visual components, the huge library of third-party components, the possibility of accessing the source code, and the speed of compilation. All of these factors seem fundamental to me in my decision to use and continue to use Delphi.
What have you been able to achieve through using RAD Studio/Delphi to create your showcase application?
I think cyber artist Barbara Lynn Helman did some interesting graphic work using dice to create her images. However, she wanted to give the impression that her dice images were created manually. Since this can be easily solved with an app, I wrote one, assuming that Ms. Lynn Helman actually uses some software for it. In any case, the credit for the original idea goes to Barbara Lynn Helman. Being able to replicate Lynn Helman’s work with a Delphi program demonstrates the language’s ability to solve these kinds of tasks.
What are some future plans for your showcase application?
One possibility for the dice-image application is to change the type of objects used to create images. For example, one of the simplest ideas is to add dominoes or playing cards. There are many possibilities in that regard. I have the impression that there are many other artists to “help” with this type of program, for example, Ken Knowlton (https://www.kenknowlton.com/), who has made many images with various objects: seashells, rocks of different sizes and shapes, and even symbols used in electronics.
Thank you, Manuel! The showcase entry for his software can be found below.
Amazon appears well on its way to thwarting an attempt to unionize workers at its Bessemer, Ala. plant with the pro-union vote falling well behind the anti-union tally on Thursday as counting was suspended for the evening.
With 1,100 workers choosing against representation by the Retail, Wholesale and Department Store Union, compared to 463 in favor of unionization, the attempt to hand Amazon its first unionized group of workers appears to be falling well short of success.
Approximately 1,630 votes remain to be counted Friday by the National Labor Relations Board which is overseeing the vote.
Stuart Appelbaum, president of the RWDSU, blamed a “broken” system that stacks the deck against labor and in favor of large corporations.
“Amazon took full advantage of that, and we will be calling on the labor board to hold Amazon accountable for its illegal and egregious behavior during the campaign,” he said Thursday. “But make no mistake about it; this still represents an important moment for working people and their voices will be heard.”
The final count should be completed by tomorrow midday. The union is expected to appeal the vote.
More than 3,200 workers participated in the unionization vote, approximately 55% of the eligible voters. The election was hand-counted and broadcast live through Zoom from the NLRB offices in Birmingham, Ala.
More than 5,800 Amazon workers in Alabama could have been affected by the push to organize under the RWDSU, an 80-year-old organization led by Appelbaum since 1998. Voting ended a week ago.
University of Washington professor Margaret O’Mara, a historian who has written extensively about the tech industry, said the outcome thus far isn’t a surprise. “It’s in keeping with what you’d expect,” she said Thursday after the counting halted for the day. “Unions have tried before in the tech industry and they’ve not gone anywhere.
“It’s a pretty decisive skew. It would have been a surprise to see this succeed.”
O’Mara said employees view the modern workplace as mostly a short-term relationship, not a career. So while workers might have been willing to fight for better wages with the expectation of 30 years at U.S. Steel, today’s employee might not be so inclined to fight when facing the prospect of only a year at an Amazon fulfillment center.
“The dynamics that propelled mass unionization are different now,” she said.
An Amazon spokesperson didn’t respond to a request for comment.
The fight has pulled in Amazon’s top brass including founder Jeff Bezos, numerous U.S. senators, President Joe Biden, and tech industry leaders who say that this fight isn’t simply about the online retail and cloud computing giant but also about the nature of modern work itself.
Today, the Washington Post reported that Amazon managers in the days leading up to the election pushed the U.S. Postal Service to install a mailbox at the Bessemer warehouse — a move the union sees as a violation of labor laws.
According to the Post: “The union has complained about the mailbox, which the Postal Service installed just prior to the start of mail-in balloting for the union election in early February. It’s argued that the mailbox could lead workers to think that Amazon has some role in collecting and counting ballots, which could influence their votes.”
The New York Times noted that union elections are generally held in person, but the pandemic changed all of that. Workers received their ballots in February, and they were due at the Birmingham NLRB office by March 30. During the past several days, the board has attempted to determine which ballots are eligible and which are not.
Original updates:
Updated 4:10 p.m.: The count is suspended for the day. The anti-union faction is winning big, with 1,100 votes against the union and 463 in favor. Approximately 1,630 votes are left to be tallied.
Updated at 3:40 p.m.: The day is drawing to a close. The gap is wide. Now at 914 votes against and 400 in favor of unionization. The ratio tomorrow must completely reverse to make it close enough to require the counting of the challenged ballots.
Updated at 3:15 p.m.: Counting has resumed. Same ratio. Amazon management pulled another 100 votes before the union supporters tallied an additional 50. Now 801 against compared to 349 in favor.
Updated at 3:05 p.m.: The NLRB staff proposed counting for one more hour and then continuing tomorrow at 8:30 AM CST. Both sides agreed. The count is at 700 against and 301 in favor.
Updated at 2:45 p.m.: Still trending strongly against forming a union in Bessemer: 698 against vs. 300 in favor of organizing. Currently, the NLRB staff is taking a 15-minute break after counting ballots for nearly three hours.
The company folks who oppose the union are winning handily at this point with no evidence that the gap is going to narrow. Witnesses and labor officials have not decided how late they will tally Thursday night in Birmingham. There are approximately 2,200 votes still uncounted. Both sides have agreed to continue for a while.
Updated at 2:15 p.m.: Amazon management is winning at a clip better than 2-1.
Updated at 2:05 p.m. Now at 440 votes against a union; 200 votes in favor (unofficial).
Update at 1:55 p.m. Amazon management extends its lead. Now at 400 votes against a union; 183 votes in favor (unofficial).
Update at 1:45 p.m. 300 votes against a union; 145 votes in favor.
Update at 1:30 p.m. 200 votes against a union; 101 votes in favor.
Update at 1:25 p.m. It is 196 No votes and 100 Yes, still nearly 2-1 against forming a union.
Update at 1 p.m. The No union side has 100 votes to the 39 for Yes. (unofficial)
Update, 12:55 p.m.: At my count — unofficial — the “No” union side is winning handily so far.
Update, 12:50 p.m.: 43 No; 15 Yes.
Update, 12:45 p.m. Nearly 3-to-1 for No so far.
Update, 12:42 p.m. PT: First three votes No. Then two yes.
CoinFlip, a supplier of cryptocurrency ATMs, is bringing the ease of bitcoin transactions to locations across Washington state.
The Chicago-based company is among a handful of operators flooding gas stations, grocery stores and other locations across the U.S. with the machines, which allow users to buy and sell digital currency. Reuters reported on the growing phenomenon, driven by the popularity of bitcoin.
There are currently 32,305 bitcoin ATMs across the country, according to a website that answers just that — howmanybitcoinatms.com. CoinFlip has 1,958 and is growing.
CoinFlip’s Washington machines will be located in Seattle, Everett, Vancouver, Yakima (two each); Spokane and Spokane Valley (three); and Olympia, Aberdeen, Tumwater, Battle Ground and Puyallup (one each).
CoinFlip is bootstrapped and employs 182 people. The company makes money from transaction fees. It charges a 6.99% fee for buying and selling cryptocurrencies across all of its ATMs.
Seattle-based startup Coinme, meanwhile, announced that it has brought its business to Florida with the launch of more than 300 bitcoin-enabled kiosks at select grocery outlets across the state. The company, which partners with Coinstar, has nearly 6,000 machines in 45 states.
Semi-supervised learning is a learning problem that involves a small number of labeled examples and a large number of unlabeled examples.
Learning problems of this type are challenging, as neither supervised nor unsupervised learning algorithms are able to make effective use of mixtures of labeled and unlabeled data. As such, specialized semi-supervised learning algorithms are required.
In this tutorial, you will discover a gentle introduction to the field of semi-supervised learning for machine learning.
After completing this tutorial, you will know:
Semi-supervised learning is a type of machine learning that sits between supervised and unsupervised learning.
Top books on semi-supervised learning designed to get you up to speed in the field.
Additional resources on semi-supervised learning, such as review papers and APIs.
Let’s get started.
What Is Semi-Supervised Learning Photo by Paul VanDerWerf, some rights reserved.
Tutorial Overview
This tutorial is divided into three parts; they are:
Semi-Supervised Learning
Books on Semi-Supervised Learning
Additional Resources
Semi-Supervised Learning
Semi-supervised learning is a type of machine learning.
It refers to a learning problem (and algorithms designed for the learning problem) that involves a small portion of labeled examples and a large number of unlabeled examples from which a model must learn and make predictions on new examples.
… dealing with the situation where relatively few labeled training points are available, but a large number of unlabeled points are given, it is directly relevant to a multitude of practical problems where it is relatively expensive to produce labeled data …
As such, it is a learning problem that sits between supervised learning and unsupervised learning.
Semi-supervised learning (SSL) is halfway between supervised and unsupervised learning. In addition to unlabeled data, the algorithm is provided with some supervision information – but not necessarily for all examples. Often, this information will be the targets associated with some of the examples.
We require semi-supervised learning algorithms when working with data where labeling examples is challenging or expensive.
Semi-supervised learning has tremendous practical value. In many tasks, there is a paucity of labeled data. The labels y may be difficult to obtain because they require human annotators, special devices, or expensive and slow experiments.
The sign of an effective semi-supervised learning algorithm is that it can achieve better performance than a supervised learning algorithm fit only on the labeled training examples.
Semi-supervised learning algorithms are generally able to clear this low bar.
… in comparison with a supervised algorithm that uses only labeled data, can one hope to have a more accurate prediction by taking into account the unlabeled points? […] in principle the answer is ‘yes.’”
Finally, semi-supervised learning can be considered in terms of inductive and transductive learning.
Generally, inductive learning refers to a learning algorithm that learns from labeled training data and generalizes to new data, such as a test dataset. Transductive learning refers to learning from labeled training data and generalizing to available unlabeled (training) data. Both types of learning tasks may be performed by a semi-supervised learning algorithm.
… there are two distinct goals. One is to predict the labels on future test data. The other goal is to predict the labels on the unlabeled instances in the training sample. We call the former inductive semi-supervised learning, and the latter transductive learning.
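As a concrete illustration of the labeled/unlabeled split (not code from the books quoted above), scikit-learn's SelfTrainingClassifier accepts a target vector in which unlabeled points are marked with -1:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.semi_supervised import SelfTrainingClassifier

# Synthetic problem: hide 90% of the labels (-1 marks "unlabeled").
X, y = make_classification(n_samples=500, random_state=0)
y_semi = y.copy()
rng = np.random.RandomState(0)
y_semi[rng.rand(len(y)) < 0.9] = -1

# Self-training wraps a base classifier and iteratively pseudo-labels
# the unlabeled points it is most confident about.
model = SelfTrainingClassifier(LogisticRegression())
model.fit(X, y_semi)
acc = model.score(X, y)  # accuracy against the held-back true labels
```

Evaluating `acc` on new data would be the inductive goal; predicting the hidden labels of the -1 points themselves is the transductive one.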
This book is aimed at students, researchers, and engineers just getting started in the field.
The book is a beginner’s guide to semi-supervised learning. It is aimed at advanced undergraduates, entry-level graduate students and researchers in areas as diverse as Computer Science, Electrical Engineering, Statistics, and Psychology.
In this paper, we provide a comprehensive overview of deep semi-supervised learning, starting with an introduction to the field, followed by a summarization of the dominant semi-supervised approaches in deep learning.
The move to cloud is happening faster than ever before and organizations are increasing their dependency on cloud storage services. In fact, Microsoft Azure Storage services are one of the most popular services in the cloud. Companies need effective threat protection and mitigation strategies and tools in place as they manage their access to cloud storage. For example, Azure Defender treats data-centric services as part of the security perimeter and provides prioritization and mitigation of threats for Storage. To help you build a framework, we examined the attack surface of storage services. In this blog, we outline potential risks that you should be aware of when deploying, configuring, or monitoring your storage environment.
Methodology
Within cloud storage services, we witness users sharing various file types, such as Microsoft Office and Adobe files, and attackers taking advantage of this to deliver malware through email. Moreover, use cases of cloud storage go beyond internal interfaces, with business logic being shared with third parties. Therefore, the Azure Defender for Storage security team has mapped the attack surface undertaken by leveraging Storage service.
This post reflects our findings based on the MITRE ATT&CK® framework, which is a knowledge base for tactics and techniques employed in cyberattacks. MITRE matrices have become an industry standard and are embraced by organizations aiming to understand potential attack vectors in their environments and to ensure they have adequate detections and mitigations in place.
While analyzing the security landscape of storage, and applying the same methodology we defined for Kubernetes, we noticed the resemblance and differences across techniques. Whilst Kubernetes underlies an operating system, its threat matrix is structured like MITRE matrices for Linux or Windows. Aiming to address the entire attack surface for storage, from data loss prevention (DLP) and sensitive content exposure to uncovering malicious content distribution over a file share Server Message Block (SMB), we adjusted the enterprise tactics to fit a data service.
The threat matrix stages
We expect this matrix to dynamically evolve as more threats are discovered and exploited, and techniques can also be deprecated as cloud infrastructures constantly progress towards securing their services. Below we will address each of the threat matrix stages in more detail.
Figure 1: Threat matrix for Storage.
Stage 1: Reconnaissance
Adversaries are trying to gather information they can use to plan future operations. Reconnaissance consists of techniques that involve actively or passively gathering information that can be used to support targeting.
Storage account discovery: Adversaries may enumerate storage account names (or leverage an existing enumeration process) to find an active storage account. Examples of such methods can vary from search dorks (site:*.blob.core.windows.net) to brute-force account creations. Adversaries can also employ crawler results or leverage public toolkits, such as Microburst and BlobHunter.
Public containers discovery: Adversaries may enumerate container names (or leverage an existing enumeration process) for an already known storage account. Adversaries can employ crawler results or leverage public toolkits, such as Microburst and BlobHunter.
Stage 2: Initial access
Adversaries are trying to get into your network. Initial access consists of techniques that use various entry vectors to gain an initial foothold within a network. Footholds gained through initial access may allow for continued access, as with valid accounts and use of external remote services, or may be of limited use due to changing passwords or keys.
Valid SAS URI: A shared access signature (SAS) is a uniform resource identifier (URI) that grants restricted access rights to storage resources. Adversaries may steal a SAS URI using one of the Credential Access techniques or capture a SAS URI earlier in their reconnaissance process through social engineering to gain initial access. Adversaries may also leverage identity and access management (IAM) privileges to generate a valid SAS offline based on a stolen storage account key.
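The offline SAS-generation point can be made concrete: a SAS signature is an HMAC-SHA256 over a canonical string-to-sign, keyed with the base64-decoded account key. Below is a simplified sketch of just that signing step; the `sign_sas` helper and the demo values are illustrative, and the real string-to-sign has many newline-separated fields defined in the Azure documentation:

```python
import base64
import hashlib
import hmac

def sign_sas(account_key_b64, string_to_sign):
    """Compute the SAS signature: Base64(HMAC-SHA256(string-to-sign)),
    keyed with the base64-decoded storage account key. The real
    string-to-sign concatenates permissions, start/expiry times, the
    canonicalized resource, and more, separated by newlines."""
    key = base64.b64decode(account_key_b64)
    digest = hmac.new(key, string_to_sign.encode("utf-8"), hashlib.sha256).digest()
    return base64.b64encode(digest).decode()

# Illustrative only: with a stolen key, valid signatures can be computed
# entirely offline, without ever touching the control plane.
demo_key = base64.b64encode(b"stolen-account-key").decode()
signature = sign_sas(demo_key, "r\n\n2025-01-01T00:00:00Z\n/blob/acct/container\n")
```

This is why a stolen account key is so dangerous: no request to the management plane is needed to mint new, valid SAS tokens from it.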
Valid access key: Adversaries may steal an access key using one of Credential Access techniques or capture one earlier in their reconnaissance process through social engineering to gain initial access. Adversaries may leverage keys left in source code or configuration files. Sophisticated attackers may also obtain keys from hosts (virtual machines) that have mounted File Share on their system (SMB).
Valid Azure Active Directory (Azure AD) principal: Adversaries may steal account credentials using one of the Credential Access techniques or capture an account earlier in their reconnaissance process through social engineering to gain initial access. An authorized Azure AD account/token can result in full control of storage account resources.
Use of public access: Adversaries may leverage publicly exposed storage accounts to list containers/blobs and their properties, information that can be beneficial as the attack advances. Adversaries may employ application programming interfaces (APIs), such as the List Blobs call. This technique is oftentimes reported as the exploitation vector used in targeted campaigns.
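As a sketch of what the anonymous List Blobs call looks like, the snippet below builds the request URL for a public container and parses a trimmed-down example of the XML the service returns; the account and container names, and the sample response, are illustrative:

```python
import xml.etree.ElementTree as ET

def list_blobs_url(account, container):
    # Anonymous List Blobs is a plain GET against the container URL
    return (f"https://{account}.blob.core.windows.net/"
            f"{container}?restype=container&comp=list")

# Trimmed-down sample of the XML the service returns; a real response
# carries much more metadata per <Blob> element.
sample = """<?xml version="1.0" encoding="utf-8"?>
<EnumerationResults>
  <Blobs>
    <Blob><Name>backups/db.bak</Name></Blob>
    <Blob><Name>config/app.json</Name></Blob>
  </Blobs>
</EnumerationResults>"""

blob_names = [b.findtext("Name") for b in ET.fromstring(sample).iter("Blob")]
# blob_names == ["backups/db.bak", "config/app.json"]
```

No credentials appear anywhere in the request, which is exactly what makes misconfigured public containers such a common initial-access vector.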
Stage 3: Persistence
Adversaries are trying to maintain their foothold. Persistence consists of techniques that adversaries use to keep access to systems across changed credentials and other interruptions that could cut off their access. Techniques used for persistence include any access, action, or configuration changes that let them maintain their foothold on systems.
Firewalls and Virtual Networks configuration changes: Storage services offer a set of built-in security features. Administrators can leverage these capabilities to restrict access to storage resources. Restriction rules can operate at the IP level. When network rules are configured, only requests originating from authorized subnets will be served. Adversaries may insert additional rules to ensure persistent access.
Role-based access control (RBAC) changes: Storage services offer built-in RBAC roles that encompass sets of permissions used to access different data types. Definition of custom roles is also supported. Upon assignment of an RBAC role to an identity object (like an Azure AD security principal), the storage provider grants access to that security principal. Adversaries may leverage the RBAC mechanism to ensure persistent access for identity objects they control.
Stage 4: Defense evasion
Adversaries are trying to avoid being detected. Defense evasion consists of techniques that adversaries use to avoid detection throughout their compromise. Techniques used for defense evasion include abusing trusted processes to hide and masquerade malicious intents. Techniques from other tactics are cross-listed here when they carry the added benefit of subverting defenses.
Firewalls and Virtual Networks configuration changes: Storage services offer a set of built-in security features. Administrators can leverage these capabilities to restrict access to storage resources. Restriction rules can operate at the IP level. When network rules are configured, only requests originating from authorized subnets will be served. Adversaries may insert additional rules to masquerade and/or legitimize their data exfiltration channel.
RBAC changes: Storage services offer built-in RBAC roles that encompass sets of permissions used to access different data types. Definition of custom roles is also supported. Upon assignment of an RBAC role to an identity object (like an Azure AD security principal), the storage provider grants access to that security principal. Adversaries may leverage the RBAC mechanism to disguise their activities as typical within a compromised environment.
Storage data clone: Storage services offer different ways to clone or back up the data stored on them. Adversaries may abuse these built-in capabilities to steal sensitive documents, source code, credentials, and other business-crucial information. This technique was employed as part of the Capital One data theft.
Data transfer size limits: Adversaries may fragment stolen information and exfiltrate it in chunks of varying sizes to avoid triggering predefined transfer-threshold alerts.
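A toy sketch of why a single fixed size threshold is easy to evade: the hypothetical `split_below_threshold` helper fragments a payload so that no individual transfer reaches the alert threshold, which is why aggregate or volumetric detection is also needed.

```python
def split_below_threshold(payload: bytes, alert_threshold: int):
    """Fragment a payload so no single transfer reaches the detector's
    per-transfer size threshold (hypothetical helper, for illustration)."""
    chunk = alert_threshold - 1
    return [payload[i:i + chunk] for i in range(0, len(payload), chunk)]

parts = split_below_threshold(b"x" * 100, alert_threshold=30)
# 4 chunks of at most 29 bytes each; none alone trips a ">= 30 bytes" alert,
# yet together they reassemble the full 100-byte payload.
```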
Automated exfiltration: Adversaries may exploit legitimate automation processes, predefined by the compromised organization, with the goal of having their logging traces blend in normally within the company’s typical activities. Assimilating or disguising malicious intentions will keep adversary actions, such as data theft, stealthier.
Access control list (ACL) modification: Adversaries may adjust the ACL configuration, at the granularity of a specific blob or container, to secure a channel for exfiltrating stolen data. These ACL modifications occur at the control-plane level, which is oftentimes overlooked. By relaxing existing exposure restrictions, adversaries may infiltrate an organization’s internal and sensitive resources.
Stage 5: Credential Access
Credential Access consists of techniques for stealing credentials like account names and passwords. Techniques used to get credentials include keylogging or credential dumping. Using legitimate credentials can give adversaries access to systems, make them harder to detect, and provide the opportunity to create more accounts to help achieve their goals.
Access query key: Adversaries may leverage subscription- or account-level access to gather storage account keys and use these keys to authenticate at the resource level. This technique exhibits cloud-resource pivoting that combines the control (management) plane with the data plane. Adversaries can query management APIs to fetch the primary and secondary storage account keys.
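As a sketch, the management-plane request to fetch keys is a POST against the `listKeys` operation on the storage account resource; the helper below just assembles that URL (the API version shown is illustrative and should be checked against the current Azure REST documentation):

```python
def list_keys_endpoint(subscription, resource_group, account,
                       api_version="2021-04-01"):
    """Assemble the management-plane URL that an authorized principal
    POSTs to in order to fetch the primary and secondary account keys.
    (api_version is illustrative; check the current Azure REST docs.)"""
    return ("https://management.azure.com"
            f"/subscriptions/{subscription}"
            f"/resourceGroups/{resource_group}"
            "/providers/Microsoft.Storage"
            f"/storageAccounts/{account}"
            f"/listKeys?api-version={api_version}")

url = list_keys_endpoint("00000000-0000-0000-0000-000000000000",
                         "prod-rg", "contosostorage")
```

The pivot is visible in the URL itself: an identity with management-plane rights over the subscription or resource group ends up holding data-plane keys for the account.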
Access Cloud Shell profiles: Cloud Shell is an interactive, authenticated, browser-accessible shell for managing cloud resources. It provides the flexibility of a shell experience, either Bash or PowerShell. To support the Cloud Shell promise of being accessible from everywhere, Cloud Shell profiles and session history are saved in a storage account. Adversaries may leverage the legitimate use of Cloud Shell to impersonate account owners and potentially obtain additional secrets logged as part of session history.
Stage 6: Discovery
Adversaries are trying to figure out your environment. Discovery consists of techniques adversaries may use to gain knowledge about the system. These techniques help adversaries observe the environment and orient themselves before deciding how to act. Tools witnessed at the reconnaissance phase are often reused toward this post-compromise information-gathering objective.
Storage service discovery: Adversaries may leverage subscription- or account-level access to discover storage properties and stored resources. Tools witnessed at the reconnaissance phase are oftentimes reused toward this post-compromise information-gathering objective, now with authorization to access storage APIs, such as the List Blobs call.
Stage 7: Lateral movement
Adversaries are trying to move through your environment. Lateral movement consists of techniques that adversaries use to enter and control remote systems on a network. Reaching their objective often involves pivoting through multiple systems and accounts to gain access. Adversaries may install their own remote access tools (RAT) to accomplish lateral movement or use legitimate credentials with native network and operating system tools, which may be stealthier.
Malicious content upload: Adversaries may use storage services to store a malicious program or toolset that will be executed later in their operation. In addition, adversaries may exploit the trust between users and their organization’s storage services by storing phishing content. Furthermore, storage services can be leveraged to park gathered intelligence that will be exfiltrated when the time suits the actor group.
Malware distribution: Storage services offer different mechanisms to support auto-synchronization between various resources and the storage account. Adversaries may leverage access to the storage account to upload malware and benefit from the built-in auto-sync capabilities to have their payload propagated, potentially weaponizing multiple systems.
Trigger cross-service interaction: Adversaries may manipulate storage services to trigger a compute service (like Azure Functions or AWS Lambda triggers): an attacker who already has a foothold on a storage container can inject a blob that initiates a chain of compute processes. This may allow the attacker to infiltrate another resource and cause harm.
Data manipulation: Content stored on a storage service may be tainted by adding malicious programs, scripts, or exploit code to otherwise valid files. When a legitimate user executes the tainted content, the malicious portion runs the adversary’s code on a remote system. Adversaries may use tainted shared content to move laterally.
Access Cloud Shell profiles: Cloud Shell is an interactive, authenticated, browser-accessible shell for managing cloud resources. It provides the flexibility of a shell experience, either Bash or PowerShell. To support the Cloud Shell promise of being accessible from everywhere, Cloud Shell profiles and session history are saved in a storage account. Adversaries may leverage the legitimate use of Cloud Shell to impersonate account owners and potentially obtain additional secrets logged as part of session history.
Stage 8: Exfiltration
Adversaries are trying to steal data. Exfiltration consists of techniques that adversaries may use to steal data from your network. Once they’ve collected data, adversaries often package it to avoid detection while removing it. This can include compression and encryption. Techniques for getting data out of a target network typically include transferring it over their command-and-control channel or an alternative channel and may also include putting size limits on the transmission.
Storage data clone: Storage services offer different ways to clone or back up the data stored on them. Adversaries may abuse these built-in capabilities to steal sensitive documents, source code, credentials, and other business-crucial information. This technique has been employed in past data thefts.
Data transfer size limits: Adversaries may fragment stolen information and exfiltrate it in chunks of varying sizes to avoid triggering predefined transfer-threshold alerts.
Automated exfiltration: Adversaries may exploit legitimate automation processes, predefined by the compromised organization, with the goal of having their logging traces blend in normally within the company’s typical activities. Assimilating or disguising malicious intentions will keep adversary actions, such as data theft, stealthier.
ACL modification: Adversaries may adjust the ACL configuration, at the granularity of a specific blob or container, to secure a channel for exfiltrating stolen data. These ACL modifications occur at the control-plane level, which is oftentimes overlooked. By relaxing existing exposure restrictions, adversaries may infiltrate an organization’s internal and sensitive resources.
Stage 9: Impact
Adversaries are trying to manipulate, interrupt, or destroy your systems and data. Impact consists of techniques that adversaries use to disrupt availability or compromise integrity by manipulating business and operational processes. These techniques might be used by adversaries to follow through on their end goal or to provide cover for a confidentiality breach.
Data corruption: Adversaries may corrupt data stored on storage services to disrupt the availability of systems or other lines of business.
Data encryption for impact (ransomware): Adversaries may encrypt data stored on storage services to disrupt the availability of systems or other lines of business, making resources inaccessible by encrypting files or blobs and withholding the decryption key. This may be done to extract monetary compensation from a victim in exchange for decryption or a decryption key (ransomware).
Get started today
Understanding the attack surface of data-focused services is the first step of building security solutions for these environments. The threat matrix for storage can help organizations identify gaps in their defenses. We encourage you to try Azure Defender for Storage and start protecting against potential threats targeting your blobs, containers, and file shares. Azure Defender for Storage should be enabled on storage accounts storing sensitive information. For a list of the Azure Defender for Storage alerts, see the reference table of alerts.
To learn more about Microsoft Security solutions visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.
Biologists have long struggled with the necessity of imaging minute biological processes – such as flowing blood cells or neurons moving through the brain – across long time-scales, as even supercomputers often strain to produce 3D imaging of those processes at longer than a few milliseconds. At UCLA, researchers are aiming to bridge that gap, enabling more advanced dynamic imaging microscopy of tissue samples – with the help of AI.
Currently, optical technologies are not advanced enough to capture these processes at the desirable spatiotemporal resolutions – so the researchers took a computational imaging technique called light-field microscopy for 3D imaging and supercharged it with a neural network. “Different from conventional microscopy, the tool reconstructed the 3D biological sample based on one snapshot through post-processing instead of scanning in the captured stage,” explained Zhaoqiang Wang, a doctoral student in bioengineering at UCLA’s Samueli School of Engineering and lead author on the paper. “The resulting temporal resolution of the images was drastically improved.”
The tool capturing flowing blood cells in a heart. Image courtesy of the researchers.
To train the neural network, the researchers used 3D image stacks paired with images from light-field microscopy, teaching the network to reconstruct the 3D images from the light-field imaging. The resulting tool was tested on roundworms and zebrafish, where it was used (respectively) to track fluorescent tags and to record the movements of blood and cardiac cells. The tool achieved 200 cubic frames per second and resolved processes occurring at spatial scales smaller than a grain of salt.
“This new system allows us to see biological events live in what is essentially five dimensions — the three dimensions of space, plus time and the molecular level dynamics as highlighted by color spectra,” said Tzung Hsiai, a professor of cardiology at UCLA and co-author of the paper. “For doctors and scientists, this could reveal the fine details of what’s happening in microscopic spaces and over millisecond-length time scales in a way that has never been done before. This advance can go a long way in helping find new insights to understand and treat diseases.”
The research was published as “Real-time volumetric reconstruction of biological dynamics with light-field microscopy and deep learning” in the February 2021 issue of Nature Methods. The article was written by Zhaoqiang Wang, Lanxin Zhu, Hao Zhang, Guo Li, Chengqiang Yi, Yi Li, Yicong Yang, Yichen Ding, Mei Zhen, Shangbang Gao, Tzung Hsiai, and Peng Fei. It can be accessed at this link.
Although casino games have been present in the history of mankind for centuries as a popular entertainment activity, today they do not have clear and transparent laws in most countries. The question is: do we know anyone who has never heard of famous games like poker or roulette? The answer seems obvious, so it is difficult to understand why there are not yet clear regulations on these activities.
A recent problem in casino games history
Let’s first recall the time when casino games were accessible to the public only inside one of the industry’s magnificent venues. Back then, exerting control over gambling was possible: only adults of legal age could enter, people with addictions could be prohibited from playing and, if necessary, patrons could be limited to a specific period of time.
However, the landscape changed, Internet access became popular, and casinos had to adapt to a new technological era. With the digital age, the controls the casino industry once had disappeared: it was much easier for minors to access gambling activities, and people with addiction problems could continue playing. As a consequence, an environment of strong distrust toward online casinos formed among the world population.
The importance of investing in marketing campaigns
From the first moment casinos began to build their online presence, the industry has taken precautions, such as monitoring users who present suspicious behavior. Nevertheless, these efforts have not been enough to convince people. That is why marketing has played an important role in gaining people’s trust, a strategy that has worked in countries such as Brazil and the United Kingdom.
Today there are many ways to run a marketing campaign. The most successful are those that involve well-known influencers or actors. Such is the case of Sunny Leone in India, a country where online casino laws are in a gray area. The prestigious Bollywood actress is currently JeetWin’s Brand Ambassador, communicating her relationship with the brand through interviews and her social media channels.
Linking a known personality with a brand automatically creates a different public image of the brand. It is a strategy that, while it may be a costly investment, will over time help the brand reach its goals. In the case of Sunny Leone, people already know her trajectory and, from her private life, that she has a family to whom she feels very attached. And this works in favor of the brand’s purposes.
The greater a brand’s visibility, the better the public can get to know it and the more familiar its communication becomes. In the case of the legal situation of online casino games, good marketing strategies may push governments to take the actions necessary to create clear laws that protect the industry and those involved.
Beyond offering entertainment, the online gaming industry also opens the door to investment and new jobs. Behind the creation of a game, there is a great team of creatives, designers and developers, highly prepared people who have a lot to contribute to the tech world.
The Seattle skyline and Mount Rainier. (GeekWire Photo / Kurt Schlosser)
New report: A first-quarter 2021 market analysis from commercial real estate company Broderick Group expresses optimism that, after a year of pandemic-induced working from home, organizations and employees in the Seattle area will grow more confident about returning to the office. The group foresees a rise in demand and space inquiries as COVID-19 vaccination levels continue to climb, even as some companies adopt distributed work models and hybrid workplace protocols.
Key findings: Even as the report references the Puget Sound office market as being among the most resilient in the country, the rebound to pre-pandemic activity will take some time, and 2021 will be a “recovery year” for the market.
Seattle’s direct vacancy rate of 8.11% and sublease vacancy rate of 6.4% make for a combined overall vacancy of 14.5%, the highest that number has been since 2010.
(Broderick Group Graphic)
The first quarter of 2020 brought 14 significant leases totaling approximately 456,000 square feet, while Q1 of 2021 saw just four significant leases totaling approximately 65,000 square feet.
Eastside growth: East of Seattle, especially in downtown Bellevue, Wash., 2020 was “exceptionally difficult,” the report says. And the first quarter of 2021 was “unquestionably brutal” as the vacancy rate rose to 9.5%, driven by a glut of sublease space. But growth from tech giants Amazon, Microsoft, Facebook and Google, a reemergence of tenant activity, and the rise of new towers in downtown Bellevue are fueling optimism for the rest of 2021 and beyond.
SpaceX is also growing on the Eastside, leasing a 124,907-square-foot building complex that’s under construction in Redmond Ridge Business Park.
Top market for tech offices: A previous report from CBRE, on the strength of early-2020 activity, found the Seattle region to be the No. 1 market in the U.S. for tech office space, eclipsing the San Francisco Bay Area for the first time since 2013.
Tech companies coming back: As large tech employers start to bring workers back to offices across the region, real estate watchers are keeping an eye on the domino effect across other businesses.
Broderick Group singled out Amazon, the largest employer in Washington state, saying that “their plans and employee happiness and success will drive their competitors and other companies to occupy their offices sooner than they may have previously planned.”
Amazon told employees in a memo last month that it expects most U.S. corporate office workers back in the office by early fall. It was the most recent update on remote work since Amazon said employees could continue to do their jobs from home through June 30.
Microsoft started bringing employees back recently to its Redmond, Wash., headquarters campus while also releasing new details about its plans for a hybrid workplace model.
Google announced this week that it will start opening some Seattle-area office buildings to employees for optional in-person work on April 20.
Seattle-based real estate company Zillow Group says that it will remain committed to its downtown office location for people who want to collaborate in person, but it’s going forward with a distributed workforce model. Plans to hire more than 2,000 employees this year won’t be focused on a centralized headquarters.
A pair of robot legs called Cassie has been taught to walk using reinforcement learning, the training technique that teaches AIs complex behavior via trial and error. The two-legged robot learned a range of movements from scratch, including walking in a crouch and while carrying an unexpected load.
But can it boogie? Expectations for what robots can do run high thanks to viral videos put out by Boston Dynamics, which show its humanoid Atlas robot standing on one leg, jumping over boxes, and dancing. These videos have racked up millions of views and have even been parodied. The control Atlas has over its movements is impressive, but the choreographed sequences probably involve a lot of hand-tuning. (Boston Dynamics has not published details, so it’s hard to say how much.)
“These videos may lead some people to believe that this is a solved and easy problem,” says Zhongyu Li at the University of California, Berkeley, who worked on Cassie with his colleagues. “But we still have a long way to go to have humanoid robots reliably operate and live in human environments.” Cassie can’t yet dance, but teaching the human-size robot to walk by itself puts it several steps closer to being able to handle a wide range of terrain and recover when it stumbles or damages itself.
Virtual limitations: Reinforcement learning has been used to train many bots to walk inside simulations, but transferring that ability to the real world is hard. “Many of the videos that you see of virtual agents are not at all realistic,” says Chelsea Finn, an AI and robotics researcher at Stanford University, who was not involved in the work. Small differences between the simulated physical laws inside a virtual environment and the real physical laws outside it—such as how friction works between a robot’s feet and the ground—can lead to big failures when a robot tries to apply what it has learned. A heavy two-legged robot can lose balance and fall if its movements are even a tiny bit off.
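One common way to shrink this sim-to-real gap, though not necessarily the approach used in this work, is domain randomization: physics parameters such as friction are perturbed every training episode so a policy cannot overfit one simulator's exact constants. A minimal sketch, with all names and values hypothetical:

```python
import random

def randomized_physics(base_friction=0.8, spread=0.25, rng=random):
    """Sample per-episode physics parameters around nominal values so the
    policy must cope with a range of dynamics rather than one exact
    simulator. (Names and values are hypothetical.)"""
    return {"friction": base_friction * rng.uniform(1 - spread, 1 + spread)}

random.seed(42)
episode_params = [randomized_physics() for _ in range(3)]
# each training episode sees a slightly different friction coefficient
```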
Double simulation: But training a large robot through trial and error in the real world would be dangerous. To get around these problems, the Berkeley team used two levels of virtual environment. In the first, a simulated version of Cassie learned to walk by drawing on a large existing database of robot movements. This simulation was then transferred to a second virtual environment called SimMechanics, which mirrors real-world physics with a high degree of accuracy but at a cost in running speed. Only once Cassie seemed to walk well in that second environment was the learned walking model loaded into the actual robot.
The real Cassie was able to walk using the model learned in simulation without any extra fine-tuning. It could walk across rough and slippery terrain, carry unexpected loads, and recover from being pushed. During testing, Cassie also damaged two motors in its right leg but was able to adjust its movements to compensate. Finn thinks that this is exciting work. Edward Johns, who leads the Robot Learning Lab at Imperial College London, agrees. “This is one of the most successful examples I have seen,” he says.
The Berkeley team hopes to use their approach to add to Cassie’s repertoire of movements. But don’t expect a dance-off anytime soon.
To stay ahead of adversaries, who show no restraint in adopting tools and techniques that can help them attain their goals, Microsoft continues to harness AI and machine learning to solve security challenges. One area we’ve been experimenting on is autonomous systems. In a simulated enterprise network, we examine how autonomous agents, which are intelligent systems that independently carry out a set of operations using certain knowledge or parameters, interact within the environment and study how reinforcement learning techniques can be applied to improve security.
Today, we’d like to share some results from these experiments. We are open sourcing the Python source code of a research toolkit we call CyberBattleSim, an experimental research project that investigates how autonomous agents operate in a simulated enterprise environment using high-level abstraction of computer networks and cybersecurity concepts. The toolkit uses the Python-based OpenAI Gym interface to allow training of automated agents using reinforcement learning algorithms. The code is available here: https://github.com/microsoft/CyberBattleSim
CyberBattleSim provides a way to build a highly abstract simulation of the complexity of computer systems, making it possible to frame cybersecurity challenges in the context of reinforcement learning. By sharing this research toolkit broadly, we encourage the community to build on our work, investigate how cyber-agents interact and evolve in simulated environments, and research how high-level abstractions of cybersecurity concepts help us understand how cyber-agents would behave in actual enterprise networks.
This research is part of efforts across Microsoft to leverage machine learning and AI to continuously improve security and automate more work for defenders. A recent study commissioned by Microsoft found that almost three-quarters of organizations say their teams spend too much time on tasks that should be automated. We hope this toolkit inspires more research to explore how autonomous systems and reinforcement learning can be harnessed to build resilient real-world threat detection technologies and robust cyber-defense strategies.
Applying reinforcement learning to security
Reinforcement learning is a type of machine learning with which autonomous agents learn how to conduct decision-making by interacting with their environment. Agents may execute actions to interact with their environment, and their goal is to optimize some notion of reward. One popular and successful application is found in video games where an environment is readily available: the computer program implementing the game. The player of the game is the agent, the commands it takes are the actions, and the ultimate reward is winning the game. The best reinforcement learning algorithms can learn effective strategies through repeated experience by gradually learning what actions to take in each state of the environment. The more the agents play the game, the smarter they get at it. Recent advances in the field of reinforcement learning have shown we can successfully train autonomous agents that exceed human levels at playing video games.
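The state-action-reward loop described above can be made concrete with tabular Q-learning, one of the simplest reinforcement learning algorithms, on a toy two-state problem (this sketch is purely illustrative and is not CyberBattleSim code):

```python
import random

# Toy two-state MDP: in every state, action 1 yields reward 1 and action 0
# yields reward 0, so the learned Q-values should come to prefer action 1.
random.seed(0)
n_states, n_actions = 2, 2
Q = [[0.0] * n_actions for _ in range(n_states)]
alpha, gamma, epsilon = 0.5, 0.9, 0.1  # learning rate, discount, exploration

state = 0
for _ in range(500):
    if random.random() < epsilon:                       # explore
        action = random.randrange(n_actions)
    else:                                               # exploit
        action = max(range(n_actions), key=lambda a: Q[state][a])
    reward = 1.0 if action == 1 else 0.0
    next_state = (state + 1) % n_states
    # Bellman update: nudge Q toward reward + discounted best future value
    Q[state][action] += alpha * (reward + gamma * max(Q[next_state])
                                 - Q[state][action])
    state = next_state
# after training, Q[s][1] exceeds Q[s][0] in both states
```

The epsilon-greedy choice is the "repeated experience" the paragraph describes: the agent mostly exploits its current estimate but occasionally explores, and the table gradually converges toward the true action values.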
Last year, we started exploring applications of reinforcement learning to software security. To do this, we thought of software security problems in the context of reinforcement learning: an attacker or a defender can be viewed as agents evolving in an environment that is provided by the computer network. Their actions are the available network and computer commands. The attacker’s goal is usually to steal confidential information from the network. The defender’s goal is to evict the attackers or mitigate their actions on the system by executing other kinds of operations.
Figure 1. Mapping reinforcement learning concepts to security
In this project, we used OpenAI Gym, a popular toolkit that provides interactive environments for reinforcement learning researchers to develop, train, and evaluate new algorithms for training autonomous agents. Notable examples of environments built using this toolkit include video games, robotics simulators, and control systems.
Computer and network systems, of course, are significantly more complex than video games. While a video game typically has a handful of permitted actions at a time, there is a vast array of actions available when interacting with a computer and network system. For instance, the state of the network system can be gigantic and not readily and reliably retrievable, as opposed to the finite list of positions on a board game. Even with these challenges, however, OpenAI Gym provided a good framework for our research, leading to the development of CyberBattleSim.
How CyberBattleSim works
CyberBattleSim focuses on threat modeling the post-breach lateral movement stage of a cyberattack. The environment consists of a network of computer nodes. It is parameterized by a fixed network topology and a set of predefined vulnerabilities that an agent can exploit to laterally move through the network. The simulated attacker’s goal is to take ownership of some portion of the network by exploiting these planted vulnerabilities. While the simulated attacker moves through the network, a defender agent watches the network activity to detect the presence of the attacker and contain the attack.
To illustrate, the graph below depicts a toy example of a network with machines running various operating systems and software. Each machine has a set of properties, a value, and pre-assigned vulnerabilities. Black edges represent traffic running between nodes and are labelled by the communication protocol.
Figure 2. Visual representation of lateral movement in a computer network simulation
Suppose the agent represents the attacker. The post-breach assumption means that one node is initially infected with the attacker’s code (we say that the attacker owns the node). The simulated attacker’s goal is to maximize the cumulative reward by discovering and taking ownership of nodes in the network. The environment is partially observable: the agent does not get to see all the nodes and edges of the network graph in advance. Instead, the attacker takes actions to gradually explore the network from the nodes it currently owns. There are three kinds of actions, offering a mix of exploitation and exploration capabilities to the agent: performing a local attack, performing a remote attack, and connecting to other nodes. Actions are parameterized by the source node where the underlying operation should take place, and they are only permitted on nodes owned by the agent. The reward is a float that represents the intrinsic value of a node (e.g., a SQL server has greater value than a test machine).
In the depicted example, the simulated attacker breaches the network from a simulated Windows 7 node (on the left side, pointed to by an orange arrow). It proceeds with lateral movement to a Windows 8 node by exploiting a vulnerability in the SMB file-sharing protocol, then uses some cached credential to sign into another Windows 7 machine. It then exploits an IIS remote vulnerability to own the IIS server, and finally uses leaked connection strings to get to the SQL DB.
This environment simulates a heterogeneous computer network supporting multiple platforms and helps show how using the latest operating systems, and keeping them up to date, enables organizations to take advantage of the latest hardening and protection technologies in platforms like Windows 10. The simulation Gym environment is parameterized by the definition of the network layout, the list of supported vulnerabilities, and the nodes where they are planted. The simulation does not support machine code execution, so no security exploit actually takes place in it. We instead model vulnerabilities abstractly, with a precondition defining the following: the nodes where the vulnerability is active, a probability of successful exploitation, and a high-level definition of the outcome and side effects. Nodes have preassigned named properties over which the precondition is expressed as a Boolean formula.
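As a rough illustration (the class and field names below are ours, not CyberBattleSim's actual API), an abstract vulnerability of this kind could be modeled as a precondition over node properties plus a success probability and an outcome:

```python
import random
from dataclasses import dataclass, field
from typing import Callable, Optional, Set

@dataclass
class Node:
    # Named properties over which vulnerability preconditions are expressed
    properties: Set[str] = field(default_factory=set)
    value: float = 0.0  # intrinsic value, used as the reward for owning the node

@dataclass
class Vulnerability:
    precondition: Callable[[Set[str]], bool]  # Boolean formula over node properties
    success_rate: float                       # probability of successful exploitation
    outcome: str                              # high-level outcome, e.g. "leaked_credentials"

    def attempt(self, node: Node) -> Optional[str]:
        """Return the outcome if the exploit fires on this node, else None."""
        if self.precondition(node.properties) and random.random() < self.success_rate:
            return self.outcome
        return None

# A vulnerability active only on Windows nodes still running SMBv1
smb_vuln = Vulnerability(
    precondition=lambda props: "Windows" in props and "SMBv1" in props,
    success_rate=1.0,  # always fires here, to keep the example deterministic
    outcome="ownership",
)

win7 = Node(properties={"Windows", "Win7", "SMBv1"}, value=10.0)
print(smb_vuln.attempt(win7))  # ownership
```

No exploit code is involved: the "vulnerability" is pure bookkeeping over named properties, which is exactly what makes the simulation safe to share.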
Vulnerability outcomes
There are predefined outcomes that include the following: leaked credentials, leaked references to other computer nodes, leaked node properties, taking ownership of a node, and privilege escalation on the node. Examples of remote vulnerabilities include a SharePoint site exposing SSH credentials, an SSH vulnerability that grants access to the machine, a GitHub project leaking credentials in commit history, and a SharePoint site with a file containing a SAS token to a storage account. Examples of local vulnerabilities include extracting an authentication token or credentials from a system cache, escalating to SYSTEM privileges, and escalating to administrator privileges. Vulnerabilities can either be defined in place at the node level or defined globally and activated by the precondition Boolean expression.
Benchmark: Measuring progress
We provide a basic stochastic defender that detects and mitigates ongoing attacks based on predefined probabilities of success. We implement mitigation by reimaging the infected nodes, a process abstractly modeled as an operation spanning multiple simulation steps. To compare the performance of the agents, we look at two metrics: the number of simulation steps taken to attain their goal and the cumulative rewards over simulation steps across training epochs.
Modeling security problems
The parameterizable nature of the Gym environment allows modeling of various security problems. For instance, the snippet of code below is inspired by a capture the flag challenge where the attacker’s goal is to take ownership of valuable nodes and resources in a network:
Figure 3. Code describing an instance of a simulation environment
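Since the snippet itself is reproduced only as an image, here is an illustrative plain-Python stand-in (the node names, fields, and vulnerability identifiers are hypothetical, not CyberBattleSim's actual API) showing how such a capture-the-flag environment might be declared as data:

```python
# Each node carries an intrinsic value plus the vulnerabilities planted on it.
# Names and identifiers below are made up for illustration.
network = {
    "Website": {
        "value": 100,
        "vulnerabilities": {
            "ScanPageContent": {"type": "remote", "outcome": "leaked_node:GitHubProject"},
        },
    },
    "GitHubProject": {
        "value": 10,
        "vulnerabilities": {
            "CredScanGitHistory": {"type": "remote", "outcome": "leaked_credentials:AzureStorage"},
        },
    },
    "AzureStorage": {
        "value": 50,
        "vulnerabilities": {},
    },
}

# The attacker starts by owning a single foothold node.
start_node = "Website"
print(sum(node["value"] for node in network.values()))  # total value at stake: 160
```

The environment is thus just a parameterizable description: change the topology, values, or planted vulnerabilities and you get a new security problem to train against.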
We provide a Jupyter notebook to interactively play the attacker in this example:
Figure 4. Playing the simulation interactively
With the Gym interface, we can easily instantiate automated agents and observe how they evolve in such environments. The screenshot below shows the outcome of running a random agent on this simulation—that is, an agent that randomly selects which action to perform at each step of the simulation.
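The agent loop itself is just the standard Gym pattern of reset-then-step. The miniature environment below is our own stand-in, not CyberBattleSim (whose action and observation spaces are far richer), but it shows the shape of a random agent:

```python
import random

class ToyLateralMovementEnv:
    """A minimal Gym-style environment: own node 0, then try to own the rest."""

    def __init__(self, n_nodes=5):
        self.n_nodes = n_nodes

    def reset(self):
        self.owned = {0}                      # post-breach: one node starts infected
        return frozenset(self.owned)          # observation

    def step(self, action):
        # action: a (source_node, target_node) attack attempt
        source, target = action
        reward = 0.0
        if source in self.owned and target not in self.owned:
            self.owned.add(target)            # successful lateral movement
            reward = 1.0                      # intrinsic value of the new node
        done = len(self.owned) == self.n_nodes
        return frozenset(self.owned), reward, done, {}

    def sample_action(self):
        # Stand-in for Gym's action_space.sample()
        return (random.randrange(self.n_nodes), random.randrange(self.n_nodes))

env = ToyLateralMovementEnv()
obs, total_reward, steps = env.reset(), 0.0, 0
done = False
while not done:
    obs, reward, done, _ = env.step(env.sample_action())
    total_reward += reward
    steps += 1
print(f"owned all nodes in {steps} steps, cumulative reward {total_reward}")
```

As with the real simulation, most randomly sampled actions fail, which is why the random agent needs hundreds of steps where a trained one needs tens.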
Figure 5. A random agent interacting with the simulation
The above plot in the Jupyter notebook shows how the cumulative reward function grows along the simulation epochs (left) and the explored network graph (right) with infected nodes marked in red. It took about 500 agent steps to reach this state in this run. Logs reveal that many attempted actions failed, some due to traffic being blocked by firewall rules, and some because incorrect credentials were used. In the real world, such erratic behavior should quickly trigger alarms, and a defensive XDR system like Microsoft 365 Defender and a SIEM/SOAR system like Azure Sentinel would swiftly respond and evict the malicious actor.
Such a toy example allows for an optimal strategy for the attacker that takes only about 20 actions to take full ownership of the network. It takes a human player about 50 operations on average to win this game on the first attempt. Because the network is static, after playing it repeatedly, a human can remember the right sequence of rewarding actions and can quickly determine the optimal solution.
For benchmarking purposes, we created a simple toy environment of variable sizes and tried various reinforcement learning algorithms. The following plot summarizes the results, where the Y-axis is the number of actions taken to achieve full ownership of the network (lower is better) over multiple repeated episodes (X-axis). Note how certain algorithms such as Q-learning can gradually improve and reach human level, while others are still struggling after 50 episodes!
Figure 6. Number of iterations along epochs for agents trained with various reinforcement learning algorithms
The cumulative reward plot offers another way to compare, where the agent gets rewarded each time it infects a node. Dark lines show the median while the shadows represent one standard deviation. This shows again how certain agents (red, blue, and green) perform distinctively better than others (orange).
Figure 7. Cumulative reward plot for various reinforcement learning algorithms
Generalizing
Learning how to perform well in a fixed environment is not that useful if the learned strategy does not fare well in other environments—we want the strategy to generalize well. Having a partially observable environment prevents overfitting to some global aspects or dimensions of the network. However, it does not prevent an agent from learning non-generalizable strategies like remembering a fixed sequence of actions to take in order. To better evaluate this, we considered a set of environments of various sizes but with a common network structure. We train an agent in one environment of a certain size and evaluate it on larger or smaller ones. This also gives an idea of how the agent would fare on an environment that is dynamically growing or shrinking while preserving the same structure.
To perform well, agents now must learn from observations that are not specific to the instance they are interacting with. They cannot just remember node indices or any other value related to the network size. They can instead observe temporal features or machine properties. For instance, they can choose the best operation to execute based on which software is present on the machine. The two cumulative reward plots below illustrate how one such agent, previously trained on an instance of size 4, can perform very well on a larger instance of size 10 (left), and vice versa (right).
Figure 8. Cumulative reward function for an agent pre-trained on a different environment
An invitation to continue exploring the applications of reinforcement learning to security
When abstracting away some of the complexity of computer systems, it’s possible to formulate cybersecurity problems as instances of a reinforcement learning problem. With the OpenAI toolkit, we could build highly abstract simulations of complex computer systems and easily evaluate state-of-the-art reinforcement algorithms to study how autonomous agents interact with and learn from them.
A potential area for improvement is the realism of the simulation. The simulation in CyberBattleSim is simplistic, which has advantages: its highly abstract nature prohibits direct application to real-world systems, thus providing a safeguard against potential nefarious use of automated agents trained with it. It also allows us to focus on the specific aspects of security we aim to study and to quickly experiment with recent machine learning and AI algorithms. We currently focus on lateral movement techniques, with the goal of understanding how network topology and configuration affect these techniques. With such a goal in mind, we felt that modeling actual network traffic was not necessary, but these are significant limitations that future contributions can look to address.
On the algorithmic side, we currently provide only some basic agents as a baseline for comparison. We would be curious to find out how state-of-the-art reinforcement learning algorithms compare to them. We found that the large action space intrinsic to any computer system is a particular challenge for reinforcement learning, in contrast to other applications such as video games or robot control. Training agents that can store and retrieve credentials is another challenge, as reinforcement learning agents typically do not feature internal memory. These are other areas of research where the simulation could be used for benchmarking purposes.
The code we are releasing today can also be turned into an online Kaggle- or AICrowd-like competition and used to benchmark the performance of the latest reinforcement learning algorithms on parameterizable environments with large action spaces. Other areas of interest include the responsible and ethical use of autonomous cybersecurity systems. How does one design an enterprise network that gives an intrinsic advantage to defender agents? How does one conduct safe research aimed at defending enterprises against autonomous cyberattacks while preventing nefarious use of such technology?
With CyberBattleSim, we are just scratching the surface of what we believe is a huge potential for applying reinforcement learning to security. We invite researchers and data scientists to build on our experimentation. We’re excited to see this work expand and inspire new and innovative ways to approach security problems.
An example of a virtual GeekWire office built with Spot. (Spot Photos)
New funding: Spot, a year-old company led by the founders of billion-dollar startup Outreach, has raised $1.7 million.
Company background: GeekWire initially spotted Spot back in June when it was in stealth mode. The 7-person startup is building software that creates a virtual representation of an office. Spot CEO Gordon Hempton compared it to “The Sims for the Enterprise.”
Hundreds of virtual worlds have been created with Spot for events and/or a virtual headquarters. The software is still in a closed beta. “In the future, we plan on creating a true digital world that connects teams to each other both inside and outside of their organizations,” Hempton said.
Tailwinds: The pandemic-driven shift to remote work is driving adoption of virtual work tools — everything from video conferencing to collaboration software. Spot competes with startups such as virtual office space platforms Gather, which just raised $26 million, and Teamflow, which just raised $11 million. There are also a bevy of others such as Virbela and Branch that sell similar virtual HQ software.
Some companies, including Amazon, want to get workers fully back in the office, but many — such as Zillow Group and Microsoft — are rolling out hybrid workplaces or distributed workforce models. That’s good news for startups such as Spot.
Founders: Hempton left Outreach in October 2019, while his co-founder Wes Hather departed in 2020. They helped launch the Seattle unicorn startup seven years ago with Andrew Kinzer and Manny Medina. Kinzer left in March 2020 and Medina remains CEO of Outreach, which sells sales automation software and raised a $50 million round in June. Medina told GeekWire previously that his co-founders left on good terms.
Before their Outreach days, Hempton and Hather helped start a Y Combinator graduate called Team Apart that built a real-time web collaboration tool for remote teams.
Investors: Seattle firm Founders’ Co-op is among Spot’s backers. The investment in Spot is the firm’s largest initial investment to date and the first out of Founders’ Co-op’s new fund. The firm is familiar with Hempton and Hather, as it was an initial investor in Outreach.
Cyberattacks continue to grow in prevalence and sophistication. With the ability to disrupt business operations, wipe out critical data, and cause reputational damage, they pose an existential threat to businesses, critical services, and infrastructure. Today’s new wave of attacks is outsmarting and outpacing humans, and even starting to incorporate artificial intelligence (AI). What’s known as “offensive AI” will enable cybercriminals to direct targeted attacks at unprecedented speed and scale while flying under the radar of traditional, rule-based detection tools.
Some of the world’s largest and most trusted organizations have already fallen victim to damaging cyberattacks, undermining their ability to safeguard critical data. With offensive AI on the horizon, organizations need to adopt new defenses to fight back: the battle of algorithms has begun.
MIT Technology Review Insights, in association with AI cybersecurity company Darktrace, surveyed more than 300 C-level executives, directors, and managers worldwide to understand how they’re addressing the cyberthreats they’re up against—and how to use AI to help fight against them.
As it is, 60% of respondents report that human-driven responses to cyberattacks are failing to keep up with automated attacks, and as organizations gear up for a greater challenge, more sophisticated technologies are critical. In fact, an overwhelming majority of respondents—96%—report they’ve already begun to guard against AI-powered attacks, with some enabling AI defenses.
Offensive AI cyberattacks are daunting, and the technology is fast and smart. Consider deepfakes, one type of weaponized AI tool, which are fabricated images or videos depicting scenes or people that were never present, or that never even existed.
In January 2020, the FBI warned that deepfake technology had already reached the point where artificial personas could be created that could pass biometric tests. At the rate that AI neural networks are evolving, an FBI official said at the time, national security could be undermined by high-definition, fake videos created to mimic public figures so that they appear to be saying whatever words the video creators put in their manipulated mouths.
This is just one example of the technology being used for nefarious purposes. AI could, at some point, conduct cyberattacks autonomously, disguising their operations and blending in with regular activity. The technology is out there for anyone to use, including threat actors.
Offensive AI risks and developments in the cyberthreat landscape are redefining enterprise security, as humans already struggle to keep pace with advanced attacks. In particular, survey respondents reported that email and phishing attacks cause them the most angst, with nearly three quarters reporting that email threats are the most worrisome. That breaks down to 40% of respondents who report finding email and phishing attacks “very concerning,” while 34% call them “somewhat concerning.” It’s not surprising, as 94% of detected malware is still delivered by email. The traditional methods of stopping email-delivered threats rely on historical indicators—namely, previously seen attacks—as well as the ability of the recipient to spot the signs, both of which can be bypassed by sophisticated phishing incursions.
When offensive AI is thrown into the mix, “fake email” will be almost indistinguishable from genuine communications from trusted contacts.
How attackers exploit the headlines
The coronavirus pandemic presented a lucrative opportunity for cybercriminals. Email attackers in particular followed a long-established pattern: take advantage of the headlines of the day—along with the fear, uncertainty, greed, and curiosity they incite—to lure victims in what has become known as “fearware” attacks. With employees working remotely, without the security protocols of the office in place, organizations saw successful phishing attempts skyrocket. Max Heinemeyer, director of threat hunting for Darktrace, notes that when the pandemic hit, his team saw an immediate evolution of phishing emails. “We saw a lot of emails saying things like, ‘Click here to see which people in your area are infected,’” he says. When offices and universities started reopening last year, new scams emerged in lockstep, with emails offering “cheap or free covid-19 cleaning programs and tests,” says Heinemeyer.
There has also been an increase in ransomware, which has coincided with the surge in remote and hybrid work environments. “The bad guys know that now that everybody relies on remote work. If you get hit now, and you can’t provide remote access to your employee anymore, it’s game over,” he says. “Whereas maybe a year ago, people could still come into work, could work offline more, but it hurts much more now. And we see that the criminals have started to exploit that.”
What’s the common theme? Change, rapid change, and—in the case of the global shift to working from home—complexity. And that illustrates the problem with traditional cybersecurity, which relies on traditional, signature-based approaches: static defenses aren’t very good at adapting to change. Those approaches extrapolate from yesterday’s attacks to determine what tomorrow’s will look like. “How could you anticipate tomorrow’s phishing wave? It just doesn’t work,” Heinemeyer says.
Arrow is a Python module for working with date and time. Given that there are several modules that do this, most notably the built-in datetime module, what makes Arrow different?
Most notably, the library is inspired by Moment.js, a JavaScript library that overrides the default implementation of the Date/Time API.
In this guide, we'll take a look at some key features of Arrow, to see how it handles certain common tasks.
First, let's go ahead and install it:
$ pip install arrow
The Arrow Class
The Arrow class is an implementation of the datetime interface, with additional functionality. It's also timezone-aware by default - we'll go into this a bit later.
You can easily create a new Arrow instance by supplying its constructor with a few arguments:
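For example (the date values here are arbitrary):

```python
import arrow

# Year, month, and day are required; the time components are optional
dt = arrow.Arrow(2021, 3, 30, 12, 5, 0)
print(dt)  # 2021-03-30T12:05:00+00:00
```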
Parsing a date and time from a string is a straightforward process with Arrow - you simply use the get() method, and supply it with a valid string format. Also, Arrow lets you effortlessly convert between its own implementation of datetime class and the built-in datetime object.
Convert String to Datetime with Arrow
If a string is already formatted in the ISO 8601 format (YYYY-MM-DDTHH:MM:SS.mmmmmm), it can be passed directly into the get() method:
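For example:

```python
import arrow

date = arrow.get("2021-03-30T12:05:00+00:00")
print(repr(date))  # <Arrow [2021-03-30T12:05:00+00:00]>
```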
This will print out the Arrow instance, which is Arrow's own implementation of the datetime interface:
<Arrow [2021-03-30T12:05:00+00:00]>
However, in practice, it is unlikely that we will be using correctly formatted strings, following the ISO specification.
Thankfully, we can still parse strings that don't adhere to conventions, by using correct Arrow format tokens. These are pre-defined and give Arrow the information needed to parse the string correctly:
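For example, parsing a non-ISO string by supplying the matching format tokens:

```python
import arrow

# DD: zero-padded day, MM: month, YYYY: four-digit year, HH:mm:ss: 24-hour time
date = arrow.get("30-03-2021 12:05:00", "DD-MM-YYYY HH:mm:ss")
print(repr(date))  # <Arrow [2021-03-30T12:05:00+00:00]>
```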
Here, we've effectively told Arrow what the format is. It maps the supplied format tokens with the string we'd like to parse, and constructs an Arrow object based on that info. Running this results in:
<Arrow [2021-03-30T12:05:00+00:00]>
Convert Between Arrow and datetime Objects
So far, we've been working with Arrow instances. However, many applications and libraries explicitly require you to use a datetime object. Conversion between these two formats is crucial.
Let's take a look at the type() of our variable:
type(now)
Output:
arrow.arrow.Arrow
To convert this to a datetime instance, we simply extract the datetime field from the Arrow object (we avoid naming the variable datetime, which would shadow the module):
now_dt = now.datetime
print(now_dt)
This results in a time-zone aware datetime instance:
Even though we haven't specified the timezone when creating the original Arrow object, the datetime object has its tzinfo defaulted to UTC. We will refer back to this in the next section, when taking a more detailed look at handling timezones.
For now, we can confirm it is indeed a datetime object:
type(now_dt)
This results in:
datetime.datetime
Similarly enough, you can easily convert datetime objects into Arrow objects, using the fromdatetime() function:
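For example (note that fromdatetime() treats a naive datetime as UTC):

```python
import datetime
import arrow

dt = datetime.datetime(2021, 3, 30, 12, 5, 0)
arrow_obj = arrow.Arrow.fromdatetime(dt)
print(repr(arrow_obj))  # <Arrow [2021-03-30T12:05:00+00:00]>
```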
One of the major issues with the datetime module is the way it handles timezones: datetime objects are timezone-naive by default, meaning they contain no timezone-related data. Arrow, on the other hand, attaches a tzinfo to every instance, which you can set through the constructor or through methods. The tzinfo defaults to UTC, regardless of the user's location.
In practical terms this means that with datetime, a user in Hong Kong would be working in local Hong Kong time whereas a user in UK would be working in local UK time - unless otherwise specified:
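To illustrate the difference:

```python
import datetime
import arrow

naive = datetime.datetime(2021, 3, 30, 12, 5)
print(naive.tzinfo)   # None - interpretation is left to the local machine

aware = arrow.Arrow(2021, 3, 30, 12, 5)
print(aware.tzinfo)   # tzutc() - UTC by default, for every user
```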
Having a standard default timezone is increasingly important with the rise of remote working and the globalization of projects. Having to explicitly set timezones for datetime objects gets stale fast. Arrow automates this process with a single, unified default timezone. You can, of course, set it to other timezones, or even local ones.
Irrespective of the user's geographical location, the following lines of code will give the same output:
arrow.now()
Output:
<Arrow [2021-03-30T17:37:28.374335+01:00]>
We can confirm this corresponds to the UTC time:
arrow.utcnow()
Output:
<Arrow [2021-03-30T16:37:59.721766+00:00]>
We can set timezones on the Arrow instance simply by passing in the timezone string in its constructor:
arrow.now('US/Pacific')
This works alongside other parameters that we've used before. Setting a timezone while constructing an Arrow instance works both for converting strings and datetime objects and calling the constructor explicitly:
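For example, all three of the following attach the US/Pacific timezone (the date values are arbitrary):

```python
import datetime
import arrow

# Explicit constructor call with a timezone string
print(arrow.Arrow(2021, 3, 30, 12, 5, 0, tzinfo="US/Pacific"))

# Parsing a string, attaching a timezone at the same time
print(arrow.get("2021-03-30 12:05:00", "YYYY-MM-DD HH:mm:ss", tzinfo="US/Pacific"))

# Converting a datetime object, attaching a timezone at the same time
print(arrow.Arrow.fromdatetime(datetime.datetime(2021, 3, 30, 12, 5), tzinfo="US/Pacific"))
```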
Another area worth looking at is how we can convert between different timezones with Arrow by using the to() method. Whilst Arrow is fully compatible with timezone modules, there is no need to import any additional modules for timezone conversion.
To start with, we can get the current time and assign it to a variable:
utc=arrow.now()
print(utc)
<Arrow [2021-03-30T17:41:08.765166+01:00]>
Now, let's convert this object to another timezone:
utc.to('US/Pacific')
This results in:
<Arrow [2021-03-30T09:41:08.765166-07:00]>
Or the time in Hong Kong:
utc.to('Asia/Hong_Kong')
<Arrow [2021-03-31T00:41:08.765166+08:00]>
We can even specify the timezone as a fixed UTC offset in hours:
utc.to('-05:00')
This is a much more intuitive way to convert between timezones, especially if you don't have a list of the appropriate names handy:
<Arrow [2021-03-30T11:41:08.765166-05:00]>
Humanizing and Shifting Dates
Oftentimes, when dealing with time spans, we don't really need a date. When talking to colleagues, we say:
"I went shopping yesterday".
Not:
"I went shopping on 2021/12/03," which is the day before 2021/12/04, the current date.
In a lot of cases, you might want to humanize dates, such as annotating when a notification arrived, an email was sent or when someone performed a certain logged action. Thankfully, Arrow allows us to really easily humanize any date via the humanize() function.
This function works wonders with the shift() function, which can shift the dates by 0...n days. You might want to make a system that notifies a person of when a certain date is coming up, in human-speak:
Here, we've created an Arrow object using the now() function. Then, we've created a series of objects by shifting the values up or down. The shift() function accepts arguments such as years, months, days, hours, minutes and seconds. We've created a tomorrow and yesterday object by shifting up and down by one day (not in-place), and an object for next_week and an arbitrary notification object that happened 2 days, 5 hours and 7 minutes ago.
Finally, when humanizing, you can specify the granularity, which lets Arrow know how detailed to be when it comes to reporting time in human-speak. By default, it'll set the granularity to day and/or week, depending on the time range. In our case, we've left the default settings on, except for the final object where we specifically want a bit of a finer granularity:
just now
in a day
a day ago
in a week
2 days 5 hours and 7 minutes ago
This is a very intuitive and human way to represent dates to your user - such as counting down days to an event, or counting days from an event.
Advantages of Using Arrow
The advantages of using Arrow could be summarized with the official statements from their documentation:
Sensible and human-friendly approach.
Supporting many common creation scenarios.
Help you work with dates and times with fewer imports and a lot less code.
In these, it appears to succeed, and Arrow overcomes the major issues with the datetime library. From what we have seen in the examples above, Arrow is certainly an improvement in terms of:
Reducing the need for importing multiple modules
Working in just one data type (Arrow)
Being timezone-aware
Simplifying the creation of the most commonly used date and time functions
Easy conversion between various types
Helpful humanization functions
Easily shifting values into the past and future
Conclusion
In this guide, we've focused on some of the benefits of using the Arrow library to work with date and time in Python. It's inspired by Moment.js, and offers solutions to some of the known problems of the datetime library.
However, it is worth bearing in mind that there are many other date and time modules available. It is also worth noting that being a built-in module gives datetime the advantage of inertia: users must proactively seek out a replacement.
It all comes down to how important the date and time element is to your project and your code.
In this article, we will learn how to deduce and calculate the Running Time of an Algorithm, and how to analyze its Time Complexity. This is very useful when analyzing the efficiency of our solutions, and it gives us the insight to develop better solutions for the problems we work on.
Now, the Running Time of an Algorithm may depend on a number of factors:
Whether the machine is a Single or Multiple Processor Machine.
It also depends on the cost of each Read/Write operation to Memory.
The configuration of the machine – 32-bit or 64-bit architecture.
The Size of Input given to the Algorithm.
But, when we talk about the Time Complexity of an Algorithm, we do not consider the first three factors. We are concerned with the last one, i.e., how our program behaves on different input sizes. So, mostly, we consider the Rate of Growth of time with respect to the input given to the program.
Now, to determine the run time of our program, we define a Hypothetical Machine with the following characteristics: single processor, 32-bit architecture, executing instructions sequentially. We assume the machine takes 1 unit of time for each operation (e.g., arithmetic, logical, assignment, return, etc.).
We take a few examples and try to deduce the Rate of Growth with respect to the input.
Let’s say we have to write a program to find the difference of two integers.
difference(a,b)
{
c = a-b -> 1 unit Time for Arithmetic Subtraction and 1 unit for Assignment
return c -> 1 unit Time for Return
}
Explanation:
This is the Pseudocode. If we run this program on the model Machine we defined, the total time taken is Tdiff = 1 + 1 + 1 = 3 units. So, irrespective of the size of the inputs, the time taken for execution is always 3 units, i.e., constant. Hence, this is a Constant Time Algorithm, and the Rate of Growth is a Constant Function. To indicate the upper bound on the growth of an algorithm, we use Big-O Asymptotic Notation: since each of our operations takes constant time, the Big O of our algorithm is O(1 + 1 + 1) = O(3), which simplifies to O(1) as we strip the constants and identify the highest-order term. Hence, the Running Time is O(1).
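The same routine in Python, for readers who want to run it:

```python
def difference(a, b):
    c = a - b   # 1 unit for the subtraction + 1 unit for the assignment
    return c    # 1 unit for the return

print(difference(9, 4))  # 5
```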
Let us look at another example suppose we need to calculate the sum of elements in a list.
sumOfArray( A[], N) COST TIMES
{
sum=0 -> 1 units 1
for i=0 to N-1 -> 2 units N + 1 ( 1 unit for assignment + 1 for increment i)
sum = sum + A[i] -> 2 units N ( 1 unit for assignment + 1 unit for sum)
return sum -> 1 units 1
}
Explanation:
This is the Pseudocode for getting the sum of elements in a list or array. The total time taken for this algorithm will be the Cost of each operation * No. of times its executed. So, Tsum = 1 + 2 * (N+1) + 2* N + 1 = 4N + 4 .
The constants are not important for determining the running time. We see that the Rate of Growth is a Linear Function, since it is proportional to N, the size of the array/list. So, simplifying the running time and considering the highest-order term, we say the Running Time is O(N).
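The same algorithm in Python, with the cost model annotated:

```python
def sum_of_array(A, N):
    total = 0                 # 1 unit, executed once
    for i in range(N):        # loop check: N + 1 evaluations in the cost model
        total = total + A[i]  # 2 units, executed N times
    return total              # 1 unit, executed once

print(sum_of_array([2, 4, 6, 8], 4))  # 20
```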
Now, if we have to calculate the sum of elements in the matrix of size N*N. The Pseudocode looks like this.
sumOfMatrix( A[][], N) COST TIMES
{
total = 0 1 Unit 1
for i=0 to N-1 2 Units N + 1
for j=0 to N-1 2 Units N * (N + 1)
total = total + A[i][j] 2 Units N * N
return total 1 Unit 1
}
Explanation:
The 1st for loop's condition is evaluated N+1 times to reach the end condition (i=N); the 2nd for loop's condition is evaluated N+1 times for each of the N outer iterations, i.e., N * (N + 1) times in total. So, the total time taken by the algorithm is TsumOfMatrix = 1 + 2*(N+1) + 2*N*(N+1) + 2*N*N + 1 = 4N² + 4N + 4.
So, on ignoring the lower-order terms and constants, we see that the Rate of Growth of the Algorithm is a Quadratic Function: it is proportional to N², the size of the Matrix. If we plot a graph for the above three functions, for the time taken with respect to their inputs, we see:
The Tdiff graph is constant, Tsum grows linearly with input n and TsumOfMatrix grows as a Square Function giving a Parabolic graph. So, in general, we say Running Time of Algorithm = Σ Running Time of All Fragments of Code.
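We can also check these growth rates empirically by timing the array and matrix sums at two input sizes (absolute timings vary by machine; only the ratios matter):

```python
import time

def sum_of_array(A, N):
    total = 0
    for i in range(N):
        total += A[i]
    return total

def sum_of_matrix(A, N):
    total = 0
    for i in range(N):
        for j in range(N):
            total += A[i][j]
    return total

for n in (500, 1000):
    arr = [1] * n
    mat = [[1] * n for _ in range(n)]

    t0 = time.perf_counter()
    sum_of_array(arr, n)
    t1 = time.perf_counter()
    sum_of_matrix(mat, n)
    t2 = time.perf_counter()

    # Doubling n should roughly double the array time and quadruple the matrix time
    print(f"n={n}: array {t1 - t0:.6f}s, matrix {t2 - t1:.6f}s")
```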
That’s it for the article. You can try out various examples and follow the general rules of thumb discussed here to analyze Time Complexity.
Feel free to leave your doubts in the comments section below.
On March 20, Kyle Niemer and Mallory Raven-Ellen Backstrom had the wedding of their dreams: intimate (around 40 guests), in a spacious venue with a dance floor, great food — and PCR tests on demand to check unvaccinated guests, administered by a doctor and nurse in the bridal party.
For two weeks, the couple was on edge. Niemer said he had “CNN dreams, where we were that wedding party with a covid outbreak.” “I was afraid,” agrees Backstrom, who announced she was pregnant at the wedding. “We had literally gone to every length to protect our guests. It was nerve-racking.”
While 2020 was marked by canceled or postponed weddings, 2021 is seeing a resurgence — albeit with ones that are smaller than pre-pandemic bashes. Couples like Niemer and Backstrom are navigating a tricky quagmire of ethics and etiquette to ensure the safety of their big day. While some are hosting on-site rapid testing, others — who can afford it — are requiring proof of vaccines, along with bouncers and “covid safety officers.”
The relaxation of state restrictions has helped weddings return, along with the widespread use and accessibility of PCR tests, considered the gold standard in detecting covid-19. Socially distant weddings were the first to emerge in the wake of lockdowns last spring and summer, along with “microweddings” and “minimonies” (pandemic-ese for small weddings of about 10 guests). Now vaccinations are offering the possibility of making weddings bigger, but they are also complicating the planning. The question remains: how do you keep guests safe? And how do you navigate the tricky etiquette around the topic of vaccination and testing with your guests?
The ethical questions
Those questions turn up almost daily on one of the internet’s biggest wedding channels, the subreddit r/WeddingPlanning, which has nearly 150,000 members. The usual queries of where to find dresses and how to handle a meddling future mother-in-law have been interrupted by questions on how to traverse mixed vaccinated/unvaccinated weddings. “Does anyone have good wording for how to communicate to guests that we’re transitioning to having a child-free wedding because kids won’t be eligible for vaccines yet?” one asks. “Bonus points if you show examples on how you worded it on the invite!” another says.
Redditors are posting sample covid inserts for paper invites for edits and thoughts.
Elisabeth Kramer, an Oregon-based wedding planner, says couples should be figuring out how to talk not only to their families but to their vendors as well. She’s created Google Doc templates to help clients speak to caterers, florists, even the officiant about their vaccination or testing plans for the day.
Radhika Graham, a wedding planner in Canada, says state-mandated gathering limits mean that couples are using wedding sites like Minted or questionnaires on SurveyMonkey to ask both guests and vendors how they feel and to urge them to get (and record) vaccinations. But there’s no sugarcoating it: asking invasive health questions can rub guests the wrong way and dampen the celebratory mood of your wedding.
Julie-Ann Hutchinson and Kyle Burton, Baltimore-based health care professionals, went to extraordinary lengths to ensure their 40-person St. Louis wedding last September ran smoothly. They hired a “covid safety officer,” a nurse who, for $60 an hour for five hours, checked temperatures, asked guests how they felt, and handed out sanitizer and masks.
“My father came up with this idea, simply because he didn’t want family members to have to monitor the group and tell them to stand six feet apart,” Hutchinson said. “He wanted there to be an impartial neutral party.” That made sense to the couple but Hutchinson admits she thought, “He’s being ridiculous. Like what do I Google, ‘bouncer’? You can’t hire on TaskRabbit for this role. How do you even Google this?”
In the end, Burton’s aunt worked in the local military veterans hospital and knew someone who could help out, and the couple found themselves relieved of having to police their relatives. “I thought we were pandemic extra,” Hutchinson said (their wedding was profiled in the New York Times). “But it was a relief. She [the covid safety officer] would stare them down if they [guests] positioned themselves too closely.”
Neither Hutchinson nor Burton would change anything. “The conflict we faced was that we wanted to make the most of our time with our loved ones,” Burton says. “We had the option to delay the wedding entirely but we wanted to celebrate our love for each other and we wanted our family with us.”
Meet the covid concierge
The two couples—Niemer and Backstrom, Hutchinson and Burton—were lucky: They were able to use a connection to find a person on short notice, at a relatively low cost, to monitor their wedding. But for couples who can’t find such a monitor or don’t have health-care connections, “private covid concierge testing” is now a service you can buy for your big day.
Asma Rashid’s boutique medical office in the Hamptons offered 35-minute turnaround testing for clients wanting to party last summer in the area’s beach houses. She’s already received requests for weddings this summer, including one she is helping a couple plan where vaccinations are explicitly required. “You’re not allowed to enter the party without proof of vaccination,” she says. “It’s not an honorary system.”
Rashid did not provide her rate, but similar services are popping up quickly online and aren’t cheap, ringing in at around $100 per test. One company, EventDoc, offers a deal for $1,500 testing for 20 guests in New York and Florida. Veritas, a Los Angeles-based startup, is gearing up for a busy wedding season outside its usual core clientele of film production crews who are required by law to be tested regularly. The company offers rapid tests for $75-$110 depending on the size of the group.
“We’ve been approved to do vaccinations by California,” says cofounder Kristopher Sims. The firm aims to eventually offer vaccinations at pre-wedding gatherings like bridal showers so guests are vaccinated in time for the wedding day—for a fee.
The demand for covid concierge services is not limited to weddings; summer graduations, bar/bat mitzvahs, quinceaneras, and any other gathering is fair game. But weddings are the most lucrative and dependable, spawning an emerging industry of rapid testing and verification services for those who can afford it. For a wedding list of even 10, those costs can quickly add up.
Simple solutions
“That’s where the challenge is: Big tech is creating a solution for the rich but in reality, it’s the masses that need it,” Ramesh Raskar says. Raskar is a professor at MIT’s Media Lab and is in the process of launching PathCheck, a paper card with a QR code that proves you are vaccinated. “It’s like a certificate,” Raskar says. When a person arrives at a venue, their QR code is checked along with a form of photo ID; if both check out, the person is permitted to enter.
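The door check Raskar describes reduces to two comparisons. As an illustration only (PathCheck’s actual data format is not described in this article, and every name below is hypothetical):

```python
def admit_guest(qr_record, photo_id_name, trusted_issuers):
    """Sketch of a PathCheck-style entry check.

    qr_record: a (name, issuer) pair decoded from the paper card's QR code.
    A guest is admitted only if the certificate comes from a trusted
    issuer AND the name on it matches the guest's photo ID.
    """
    name, issuer = qr_record
    if issuer not in trusted_issuers:
        return False  # certificate was not issued by a trusted clinic
    return name == photo_id_name  # QR name must match the photo ID

# A matching record from a trusted clinic is admitted; anything else is not.
print(admit_guest(("Ada Lovelace", "clinic-42"), "Ada Lovelace", {"clinic-42"}))  # True
```

The real system would verify a cryptographic signature rather than a bare issuer string, but the admission logic is the same two-factor match.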
On the surface, PathCheck ticks a lot of boxes: It’s pretty secure and, because Media Lab is a nonprofit, it is free—so far. And PathCheck is a paper product rather than a digital one, making it especially attractive for undocumented immigrants, the elderly, and those without internet access.
Tools like PathCheck are one possible route toward opening up safe, large gatherings to people without much economic means in the United States. But it has drawbacks: PathCheck has to gain traction for people to trust and use it. And, as Veritas’s Sims and Capello note, there is currently no straightforward, national way to verify in one state a person who was vaccinated in another. Even if there were, vaccine passports are far from an uncontroversial option.
Weddings have been another example of how the pandemic has exacerbated inequity. The decision to have a safe wedding—any gathering, really—this year has been dictated by wealth and access. Some couples can afford to have a medical professional moonlight as a covid bouncer or send at-home PCR tests. Others can’t and have to make the difficult decision to either cut their guest list down and hope for the best—or just wait until the summer and hope enough people have been vaccinated.
That won’t change soon. Sure, President Joe Biden has said every American adult is eligible for a vaccine by April 19, but children will remain unvaccinated for some time, and the April 19 date does not account for the bottleneck of people wanting vaccines but unable to access them because of demand. While it might be safe to assume most people are fully vaccinated by June, it will be hard to actually know—unless, of course, you have the money to find out.
On the other hand, wedding season might be a boon for pushing the vaccine hesitant toward getting a vaccine simply because of FOMO. In Israel, life is mostly back to pre-pandemic normality after its massive vaccination campaign, helped along by incentivizing vaccine skeptics to get the vaccine so they can be part of social activities, according to a recent JAMA article.
Similarly, Niemer and Backstrom said that the expected presence of two vulnerable people—Backstrom’s father, who has stage 4 lung cancer, and her 90-year-old grandmother—may have guilted people into getting the vaccine. “They [guests] knew the stakes,” Backstrom says. “Everyone was pretty much on their best behavior. We didn’t have guests who were stubborn and resistant.”
The Windows Ribbon Framework Library for Delphi is developed by Eric Bilsen; see here for the license terms. For support with this product, please use the latest version of the Windows Ribbon Framework Library for Delphi and post your question on stackoverflow.com.
Bitcoin is a digital currency that is getting popular at a rapid pace. It is also an excellent investment as you can earn massive profits by trading it over the Internet. You can use the anon system app if you are new to bitcoin trading and want to get the best experience. Bitcoin trading is a bit complex, and there are several risks involved in it. If you want to minimize the risks and make maximum profits with bitcoin trading, you must follow the tips mentioned below.
Make good use of stop-loss and profit targets
Bitcoin trading is not an easy task, as it requires a lot of knowledge and experience. It is full of risks: a single mistake can cost you a lot of money. So there are some tools you should use while trading bitcoins, as they help to minimize those risks. Bitcoin’s price is highly volatile, which makes it difficult for traders to decide when to buy and sell. Stop losses and profit targets help you make accurate trading decisions with ease and comfort.
A stop loss is the minimum price at which you are willing to sell your bitcoins; it caps your losses. A profit target is the price at which you plan to sell to lock in a gain. Both are especially useful when the price fluctuates suddenly, as they let you act at the right time without making panicked decisions.
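Mechanically, a stop loss and a profit target are just two price thresholds checked against the market. A minimal sketch (all prices and the helper name are hypothetical, not any exchange’s API):

```python
def trade_action(current_price, stop_loss, profit_target):
    """Decide whether to exit a position (hypothetical helper).

    stop_loss and profit_target are absolute prices chosen in advance,
    so the exit decision is mechanical rather than made in a panic.
    """
    if current_price <= stop_loss:
        return "sell: stop-loss hit, cap the loss"
    if current_price >= profit_target:
        return "sell: profit target reached, lock in the gain"
    return "hold"

# Example: bought at 50,000; exit automatically below 47,000 or above 56,000.
print(trade_action(46_500, 47_000, 56_000))  # sell: stop-loss hit, cap the loss
```

Because the thresholds are set before the trade, a sudden price swing triggers a predetermined action instead of an emotional one.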
Be heedful while using leverage
Leverage is a feature offered by some trading platforms that allows you to borrow funds and use them for bitcoin trading. It is attractive for those who don’t have enough money to trade bitcoins, but it is also quite risky. You need to maintain a proper balance between risk and reward to get the most out of leveraged trading; without solid experience and knowledge, it can lead to massive financial losses.
If you are new to bitcoin trading, it is better to avoid leverage altogether, as it involves significant risks. If you do use it, gain proper knowledge beforehand; with sufficient experience, you can take advantage of leveraged bitcoin trading and maximize your profits.
Begin small
If you are trying your hand at bitcoin trading for the first time, start with a small investment, as it allows you to gain knowledge and experience without taking big risks. The bitcoin market is volatile, which is why it is crucial to begin with a small investment that carries minimal risk of a massive loss. Novice bitcoin traders should never make a big investment at the start, as it can easily backfire.
So, if you want to become a successful bitcoin trader, make small investments first. This lets you gain a proper understanding of the bitcoin market and make the right trading decisions with minimum risk. Moreover, keep one thing in mind: never invest more money than you can afford to lose.
Select the best wallet
A bitcoin wallet is essential for trading bitcoins; without one, you cannot make transactions. So, before you begin trading, you need to choose a suitable wallet. There are different types of wallets, each offering different features. There are mainly two types: cold wallets and hot wallets. A hot wallet is an online bitcoin wallet that lets you access your bitcoins over the Internet and make online transactions. It is highly accessible and offers great convenience.
On the other hand, a cold wallet is an offline bitcoin wallet that is not connected to the Internet. It offers a high level of security as, without an internet connection, there is no risk of online thefts or hacking. Each trader has different needs and requirements, so you must choose a wallet that fits all your needs and requirements perfectly.
Bitcoin is a modern currency with no physical form. You can use it to make quick and convenient online transactions at minimum cost. It is based on blockchain technology, which allows peer-to-peer transfers between users on the bitcoin network.
If you are looking for the perfect bitcoin trading software, you must visit bwceventnow. Bitcoin has great potential, and there are high chances that it may replace fiat currency in the future. Some of the most fantastic benefits of using bitcoins are as follows.
Minimum risk of fraud
Nowadays, fraud is common, and online transactions carry a particularly high risk of it. If you want to stay protected from such risks, you can use bitcoin, which allows you to make quick transactions without worrying about sensitive information being leaked.
Bitcoin offers an excellent level of anonymity and allows users to make hidden transactions without revealing their real identity. It minimizes the risk of a data breach and allows you to make transactions with great ease and comfort. If you want to make online purchases without any fraud risks, there is no better option than bitcoin.
Easy to use
Bitcoin is immensely easy to use as you can make transactions with it anytime and anywhere. It is accepted all over the world, which allows you to make international transactions from any corner of the world without contacting a bank or any financial institution. You can simply store them in digital wallets and access them anytime via the Internet. It saves a lot of time as there is no need to wait for several hours to get approval from the bank or complete the formalities.
All you need is an internet connection and a device to make a transaction. Moreover, you can also store bitcoins on a hardware wallet (a USB-like device), which you can carry with you to send or receive bitcoins anytime with great convenience.
Make tax-free purchases
Usually, when you make an online purchase, you need to pay various taxes on it, which increase the total amount. One of the most-cited benefits of using bitcoin is that no extra taxes are imposed on the transaction itself: it is a decentralized cryptocurrency, so there is no intermediary adding charges or levies at the point of transfer. Tax rules on income and purchases still vary by jurisdiction, however.
With minimum taxes, you can purchase expensive luxury items without worrying about the heavy taxes that you usually need to pay on them. There are no taxes imposed on bitcoin transactions, which is a massive advantage.
Minimum transaction charges
With traditional payment methods such as credit cards, certain charges are imposed on every transaction, and the larger the transaction, the higher the charges.
This is the primary reason bitcoin is considered better than traditional payment methods: transaction charges are minimal, transactions are nearly instant, and you save money.
You need not visit a bank or complete several formalities, which is another big reason people prefer bitcoin for making transactions. There are no documents to file, and delays due to server errors are minimized.
Anonymous transactions
Nowadays, everyone wants to maintain their privacy and make transactions with high anonymity. With traditional payment methods, that is almost impossible, as banks keep a record of all your transactions, and the government can easily track you through your transaction history. If you want maximum anonymity, bitcoin lets you make pseudonymous transactions that are difficult to trace back to you.
Bitcoin transactions are recorded on the blockchain, a public ledger, but no personal or financial information is attached to them. That pseudonymity is also the primary reason some people use bitcoin for illegal purchases, believing there is little chance of being caught.
Twitch users may now face punitive measures by the service for actions that occurred offline, on other platforms, or before they started using Twitch at all.
This follows up on updates to the service’s Hateful Conduct and Harassment policy, which took effect Jan. 22, as well as the November appointment of Angela Hession, former head of gaming safety and trust at Microsoft, as VP of trust and safety at Twitch.
Under the new rules, Twitch may take action against users of its service for “hateful conduct or harassment” that occurs off Twitch’s services, when directed at or committed by members of the Twitch community and when there is “available, verifiable evidence” on the subject.
Toward the latter end, Twitch has created a dedicated email address, OSIT@twitch.tv, for confidential reports of misconduct by holders of a Twitch account. It has also brought on an unnamed law firm with expertise in conducting independent workplace and campus investigations, and increased the size of its internal law enforcement response team.
In January, we began enforcing our updated Hateful Conduct and Harassment policy so we could better protect every person on Twitch.
Today, we want to share our plans for how we’ll handle incidents that happen off Twitch.
There’s room to be cynical here. Twitch has been notorious in the past for inconsistent or wholly absent enforcement within its community, such as how it took until 2020 for Dr. Disrespect to eat a perma-ban. Last summer, a number of partnered Twitch streamers stopped broadcasting for 24 hours under the hashtag #TwitchBlackout, in protest of Twitch’s lack of action against discrimination and harassment on its platform.
However, the new policies represent a big jump in overall content moderation for Twitch, even in comparison to comparable sites such as YouTube, and are unique in targeting offline behavior at all.
Twitch’s blog post is careful to state that the misconduct targeted by this measure is anything that poses a “substantial safety risk” to Twitch’s users and community. Examples include, and are not limited to, violent extremism, terrorist recruitment, leadership or membership in known hate groups, sexual assault, and credible threats of violence against Twitch itself or its staff.
Twitch further encourages its users who have run into this kind of behavior by other Twitch users to file reports with law enforcement as a first step, rather than simply getting the offender kicked off of Twitch and calling it good.
If someone’s found to have violated these guidelines, punitive measures by Twitch can begin with an indefinite suspension for a first offense. The policy does go on to specify that a person who’s found to have committed a relevant offense, such as a “form of severe abuse,” can get their account terminated, and will subsequently be prohibited from registering a new account.
Twitch further promises that it will only take action when it’s been given evidence for an account holder’s actions, such as screenshots, interviews, video, or police reports, and when its first- or third-party investigators have been able to verify that evidence.
Given its meteoric rise and increasing prominence over the course of the last year, it may be that Twitch is finally getting its act together here… or it may just be trying to stave off disruption as more American legislators continue to attack Section 230.
Following the latest Series G round of funding in the amount of $110 million, NoSQL database vendor Redis Labs is worth more than $2 billion, the company announced today. That additional venture funding will help the company to continue positioning the product as a versatile real-time data platform for the cloud.
Redis Labs is the commercial vendor behind the open source Redis database, which built a reputation early in its existence for providing a very fast, in-memory key-value store. The high-performance cache remains central to the Redis offering, but in recent years, Redis Labs has broadened its repertoire as a multi-modal database.
Redis Enterprise includes modules for search, time series, graph, JSON, AI, and Bloom (for probabilistic data structures). Similarly, Redis has adopted additional data structures beyond the basic key-value pairs: strings, sets, lists, hashes, bitmaps, HyperLogLogs, geospatial indexes, and streams.
At the same time, Redis Labs has been particularly attuned to the shift to cloud deployments, which it sees as a good fit for its database, particularly given its ability to run as a cloud-native service atop Kubernetes. The company is positioning Redis Enterprise as a way to manage data in hybrid and multi-cloud deployments, with five-nines availability and support for active-active deployments.
Redis Enterprise offers several modules
This speed and versatility enable Redis Enterprise to fit a range of use cases, including fraud detection, real-time inventory management, and leaderboards, in industries like financial services, retail, and healthcare. All told, Redis Labs claims more than 8,000 paying customers, including 31% of the Fortune 500, with a trailing three-year CAGR of 54% and a net retention rate north of 120%, indicating a successful “land and expand” strategy.
This growth makes Redis Labs interesting to investors. Before the funding round announced today, the Mountain View, California-based company had raised $245.6 million across eight funding rounds, according to Crunchbase. Bain Capital Ventures, Viola Ventures, Goldman Sachs, and Francisco Partners invested the bulk of the capital going back to the Series A in 2013.
Today’s Series G funding round of $110 million was driven by Tiger Global, a New York City firm that invests in public and private companies. It also included participation from another new investor, SoftBank Vision Fund, and TCV, which was an existing investor. Those three firms also acquired additional ownership as part of a $200 million secondary transaction, the company announced.
Redis’ strengths in the speed and cloud departments were central to the investment decision of Tiger Global Management, indicates John Curtius, a partner at the firm.
Redis customers
“Companies are increasingly looking for ways to leverage the efficiency and flexibility of the cloud to drive their business forward and Redis Labs is the best partner for them in this journey,” Curtius stated in a press release. “Redis Labs has developed a real-time data platform to solve for low-latency requirements of business-critical applications and the go-to-market strategy to succeed alongside the cloud hyperscalers.”
It’s all part of Redis Labs’ plan to become the de facto real-time data platform for cloud and on-prem deployments, says Ofer Bengal, Redis Labs co-founder and CEO.
“We founded Redis Labs with the idea that the future of the database market would be defined by performance, where Redis excels,” he stated in a press release. “Through the dedication of our team, Redis has become an enterprise-grade data platform to tackle nearly any real-time use case across every industry.”
Redis is gearing up to host its RedisConf 2021 in two weeks. The conference, which will be held April 20 and 21 online, is expected to attract 5,000 Redis community members. To register for the event, which will feature a hackathon with $100,000 in prize money, go to redislabs.com/redisconf/.
Icosavax creates virus-like particles with technology licensed from the University of Washington’s Institute for Protein Design. (Icosavax Photo)
New funding: At a moment when vaccines are on everyone’s mind, a Seattle-based biotech company that has an unusual approach to vaccine production has raised $100 million in a Series B round. The startup creates computer-designed, virus-like particles that are used in vaccines to trigger immune responses. The new funding follows a $51 million round in October 2019. Icosavax has 17 employees.
More on the vaccines: Icosavax is working on a COVID-19 vaccine, as well as vaccines to prevent less well known diseases: respiratory syncytial virus (RSV) and human metapneumovirus (hMPV). Early research results show that the vaccines elicit strong immune responses.
“Based on preclinical data, we believe our vaccine candidates could offer significant protection against leading viral causes of pneumonia in older adults where no licensed vaccines currently exist,” said Icosavax CEO Adam Simpson, regarding the RSV and hMPV vaccines.
The company launched its COVID program in October, supported in part by $10 million from the Bill & Melinda Gates Foundation.
Strong ties to the Institute for Protein Design: The company is a spinout from the University of Washington’s Institute for Protein Design (IPD). The institute is led by David Baker, who is an Icosavax co-founder and advisor. Icosavax’s virus-like particle technology was invented at IPD by Neil King, who serves as chair of the startup’s scientific advisory board.
Simpson was formerly the CEO of PvP Biologics, another IPD spinout, which he oversaw from the company’s launch through its sale to the pharmaceutical company Takeda.
Investors: The investment round was led by RA Capital Management and joined by Janus Henderson Investors, Perceptive Advisors, Viking Global Investors, Cormorant Asset Management, Omega Funds and Surveyor Capital. Others contributing to the round include existing investors Qiming Venture Partners USA, Adams Street Partners, Sanofi Ventures, and ND Capital. The round included cash from an October 2020 funding.
Seattle-based travel giant Expedia Group named Patricia Menendez-Cambo, deputy general counsel of SoftBank Group and general counsel of the SoftBank Latin America Fund and SoftBank Opportunity Fund, to its board of directors.
Menendez-Cambo fills a vacancy created by the resignation in January of longtime board member A. George “Skip” Battle. She joined the board’s audit committee, satisfying a Nasdaq rule requiring three independent directors on the audit committee, Expedia said.
Expedia Group, based in Seattle, includes travel brands such as Vrbo, Orbitz, Hotwire, Trivago, Hotels.com, and Egencia in addition to the flagship Expedia.com. The pandemic took an extraordinary toll on the company’s business amid the massive slowdown in global travel. Annual revenue fell 57% to $5.2 billion in 2020, and gross bookings fell 66% to $36.7 billion.
In a statement released by the company, Menendez-Cambo described travel as “critical to the global economy,” saying that she’s excited to join the Expedia Group board “during an era of recovery and transformation for the travel industry.”
Prior to joining SoftBank, Menendez-Cambo was an executive at Greenberg Traurig, serving in roles including vice chair and member of the law firm’s executive committee. She holds a law degree from the University of Pennsylvania Carey Law School and a business degree from the University of Miami.
Expedia’s board has 14 members, including chairman and senior executive Barry Diller; Clinton Foundation vice chair Chelsea Clinton; Uber CEO and former Expedia Group CEO Dara Khosrowshahi; and OpenAI CEO Sam Altman, among others. Vice chairman Peter Kern took over last year as Expedia Group CEO following the ouster of previous CEO Mark Okerstrom in December 2019.
A Google office building in Seattle’s South Lake Union. (GeekWire Photo / Kurt Schlosser)
Google has announced plans to start bringing workers back to the company’s offices in the Puget Sound region, including on campuses in Seattle’s South Lake Union neighborhood and in Kirkland, Wash.
Some buildings in those areas will be opening on April 20. The plan is to operate at less than 20% capacity, the company told GeekWire Wednesday, and employees will have an opportunity to reserve desks if they want to go into the office.
A spokesperson added that returning to in-person work is purely optional and everyone still has the option to work from home until September. Offices will look a bit different than they did a year ago, but where possible there will be meals, snacks, and other amenities, the company said.
Mountain View, Calif.-based Google employs 6,300 people in Washington state, and its Seattle-area offices were the first to transition to a work-from-home model in the U.S. at the beginning of March 2020 when the coronavirus first arrived in the region. Those offices are now some of the tech giant’s first to reopen.
“Offices will begin to open in a limited capacity based on specific criteria that include increases in vaccine availability and downward trends in COVID-19 cases,” Fiona Cicconi, Google’s chief people officer, said in an email to employees last month, according to The New York Times. “We advise you to get a vaccine, though it will not be mandatory to have one in order for Googlers to return to the office.”
After Sept. 1, Google will require employees to formally apply for more than 14 days per year of remote work, CNBC noted, and the company expects employees to come to the office three days a week.
Google’s expanding campus at Kirkland Urban, across Lake Washington from Seattle, in December 2020. (Google Photo)
Google operates a substantial engineering center across multiple locations in the Seattle area, including an expanding campus in Kirkland and a complex in South Lake Union in the shadow of Amazon’s headquarters.
The company announced last month, even as the pandemic has upended traditional work and in-person requirements, that it is still growing its physical footprint across the U.S. It plans to invest $7 billion in offices and data centers across multiple states this year and to create 10,000 full-time U.S. jobs.
Google has ongoing construction work at its new Kirkland Urban campus east of Seattle. It also last year signed an agreement to buy nearly 10 acres of land at a car dealership site just down the street in Kirkland.
Construction is also underway at Block 38 at 520 Westlake, one of five buildings that will make up Google’s 900,000 square-foot campus in South Lake Union.
Amazon told employees in a memo last month that it expects most U.S. corporate office workers back in the office by early fall. It was the most recent update on remote work since Amazon said employees could continue to do their jobs from home through June 30.
Microsoft started bringing employees back recently to its Redmond, Wash., headquarters campus while also releasing new details about its plans for a hybrid workplace model.
Adoption of asynchronous APIs, which return data as microservices and other resources become available, is said to be on the rise among Apache Kafka users. That trend has prompted API vendors to offer platforms that help developers discover Kafka clusters as they build event-driven architectures.
Among them is RapidAPI, which this week announced the beta launch of a browser-based API platform for finding Kafka instances and topics, or categories, along with viewing topic schemas and configurations. The integration with the distributed streaming platform also would allow developers to connect to microservices, REST and other APIs.
Demand for asynchronous APIs also reflects greater developer participation in a booming “API Economy,” with more than 60 percent of survey respondents telling RapidAPI they used more APIs last year. Seventy-one percent of those polled said they expected to use more app interfaces in 2021.
“APIs are getting siloed” and the growing ecosystem is becoming a “Wild West,” said Iddo Gino, RapidAPI’s founder and CEO.
“We’ve seen the adoption of these asynchronous types of APIs and flavors of APIs rising, and what we’re bringing is the ability to surface those APIs” and related documentation, he added. The platform helps “developers discover and connect those asynchronous APIs more easily,” Gino told Datanami.
Source: RapidAPI
The other factor is growing enterprise adoption of Kafka, which “is increasingly used as the message broker in event-driven architectures with asynchronous microservices,” Srivatsan Srinivasan, RapidAPI’s vice president for product, noted in a blog post announcing support for Kafka discovery and testing.
Being browser-based, the API platform eliminates the need to write code to determine API functions. Srinivasan said developers can simply assess the schema and interface definition, then test the send/receive function.
RapidAPI CEO Iddo Gino
“We see Kafka services as another important ‘API type’ for development teams,” Srinivasan added. The message broker also serves as the communications interface for emerging asynchronous microservices architectures.
According to a RapidAPI survey of about 1,500 developers with varying degrees of coding experience, the number of AsyncAPIs in production tripled last year to 19 percent of deployments.
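For context, AsyncAPI is a specification for describing event-driven interfaces, much as OpenAPI describes REST endpoints. A minimal, illustrative AsyncAPI document for a single Kafka topic might look like the sketch below; the server address, topic name, and payload fields are invented for the example.

```yaml
asyncapi: '2.0.0'
info:
  title: Video Encoding Events    # hypothetical service
  version: '1.0.0'
servers:
  production:
    url: broker.example.com:9092  # invented broker address
    protocol: kafka
channels:
  encoding.completed:             # a Kafka topic
    subscribe:                    # consumers receive these messages
      message:
        payload:
          type: object
          properties:
            jobId:
              type: string
            status:
              type: string
```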
With enterprise adoption growing, Kafka clients are being used to develop distributed applications and microservices that process streams of events. Srinivasan added that rapid adoption has created a new set of challenges, including how to determine availability of Kafka clusters and what defined topics they contain.
To address the disconnect, RapidAPI said Wednesday (April 7) it is adding support for Kafka as another API type, alongside REST, SOAP and GraphQL APIs, on its enterprise hub and marketplace.
The AsyncAPI platform seeks to support both Kafka providers and consumers. For providers, the tool makes it easier to expose Kafka topics for consumption. Easier access to Kafka topics is promoted as streamlining service design, allowing developers to stream data faster.
AsyncAPIs and GraphQL saw the biggest increases in adoption and use in production in 2020, according to the vendor survey. RapidAPI added support for GraphQL last year.
The addition of Kafka and AsyncAPI support “is part of a mission of really being able to have every type of API…living in one centralized place,” said Gino.
The combination of the Kafka message broker and asynchronous APIs allows users to, for example, add a task to a data request queue, then be notified when a computing-intensive job like video encoding is completed.
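The request-queue pattern described above can be sketched in a few lines of Python. This is a minimal illustration, not real Kafka usage: thread-safe in-memory queues stand in for the Kafka topics, and `encode_video` is a hypothetical placeholder for the long-running job.

```python
import queue
import threading

# Two in-memory queues stand in for Kafka topics ("tasks" and "notifications").
# In a real deployment these would be topics accessed via a Kafka client library.
task_topic = queue.Queue()
done_topic = queue.Queue()

def encode_video(job):
    # Hypothetical stand-in for a computing-intensive job such as video encoding.
    return {"job_id": job["job_id"], "status": "encoded"}

def worker():
    # Consume tasks asynchronously and publish a completion notification.
    while True:
        job = task_topic.get()
        if job is None:  # sentinel value shuts the worker down
            break
        done_topic.put(encode_video(job))
        task_topic.task_done()

# Producer side: enqueue a request, then continue without blocking on the result.
threading.Thread(target=worker, daemon=True).start()
task_topic.put({"job_id": 42, "file": "clip.mp4"})

# Notification consumer: woken only when the job completes.
notification = done_topic.get(timeout=5)
print(notification)  # {'job_id': 42, 'status': 'encoded'}
```

The point of the pattern is that the requester never polls the encoder; it subscribes to the completion topic and is notified asynchronously.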
In February, the San Francisco-based company acquired API design and collaboration tool vendor Paw. The deal aims to extend RapidAPI’s open platform across the app interface development lifecycle.
An artist’s conception shows one of the buildings currently under construction at Redmond Ridge Business Park. (Illustration via Kidder Mathews)
SpaceX is leasing a 124,907-square-foot building complex that’s under construction in Redmond Ridge Business Park, east of Seattle, according to the latest industrial real estate market report from Kidder Mathews.
Kidder Mathews, which listed the property for lease, says construction is slated for completion this fall.
The construction site, which takes in the business park’s Buildings 4 and 5 and offers up to 300 extra parking places nearby, is just a block away from SpaceX’s existing facilities at Redmond Ridge. Those facilities serve as the headquarters for SpaceX’s Starlink satellite development and manufacturing operation.
Eventually, SpaceX aims to provide global broadband internet access via a network of thousands of Starlink satellites in low Earth orbit. More than 1,400 satellites already have been launched — including 60 that were sent into orbit today — and Starlink has been gradually expanding its “Better Than Nothing” beta offering.
The new lease arrangement appears to signal a significant expansion of SpaceX’s Redmond footprint. We’ve reached out to SpaceX and will update this report with anything we hear back.
Kidder Mathews’ pamphlet about the property, just off Northeast Novelty Hill Road, suggests that office and/or manufacturing facilities can be built to suit the tenant. Building 4 would take in 57,207 square feet, while Building 5 would add 67,700 square feet.
SpaceX has been occupying at least three other buildings at Redmond Ridge Business Park, and that’s not the only satellite development center in Redmond: The 219,000-square-foot headquarters for Amazon’s Project Kuiper, which has its own plans for a broadband satellite constellation in low Earth orbit, is a 10-minute drive away at Redmond Commerce Center.
— Former Amazon Web Services and Microsoft executive Teresa Carlson will join Splunk in the new role of president and chief growth officer effective April 19. She is currently vice president of worldwide public sector and industries at AWS.
Carlson was promoted last year, following the departure of another AWS vice president, Mike Clayville. At Amazon she oversaw the worldwide public sector business, along with sales to the healthcare and financial industries. Prior to AWS, she spent nine years at Microsoft leading sales to federal government agencies.
Splunk is a publicly-traded big data company based in San Francisco with more than 6,000 employees worldwide. The company’s revenue for the full year ended Jan. 31 was down 5% to $2.23 billion.
— Uber board member and former Xerox CEO Ursula Burns joined the advisory board of contract management startup Icertis.
Burns served as CEO of Xerox for seven years and as chairwoman for another seven, stepping down in 2017. During the Obama administration, she helped lead the White House’s national STEM program.
Headquartered in Bellevue, Wash., Icertis is valued at $2.8 billion and sells its contract management software in 90 countries.
Elizabeth Solomon. (Overstock.com Photo)
— Amazon’s former Head of Global Private Brands Marketing Elizabeth Solomon is now CMO at Overstock.com. Prior to Amazon, Solomon was vice president of marketing for Samsung Electronics America and held marketing roles at Walmart and Cadbury. She is based in Seattle.
“I’m excited about the opportunities ahead and helping further amplify Overstock’s reputation as the premier online shopping destination for home furnishings,” Solomon said.
— D2iQ, a San Francisco-based hybrid cloud startup, appointed former AWS executive Karl Triebes as VP of product.
Triebes is based in Kirkland, Wash., and was previously a vice president and general manager at Amazon Web Services. He was also previously CTO at network infrastructure and security technology company F5 Networks.
Dave Heiner. (Truveta Photo)
— Healthcare data startup Truveta, which emerged from stealth mode last year, announced Dave Heiner as chief policy officer and general counsel. Led by former Microsoft executive Terry Myerson, the Seattle-area startup also announced CMO Lisa Gurry will now be chief operating officer.
Heiner is the latest Microsoft veteran to join Truveta. He spent more than 25 years at the Redmond, Wash.-based company, most recently as a strategic policy advisor. He co-founded Microsoft’s internal AI advisory board.
“I’m grateful for the opportunity to help Truveta become a leader in ethical healthcare innovation,” said Heiner.
Truveta is building a platform with de-identified U.S. health data from its health provider partners including Providence St. Joseph Health. Myerson previously led Microsoft’s Windows and Devices Group before departing in 2018 after a 21-year career at the company.
— AI messaging platform LivePerson hired Monica Pool Knox as SVP and chief people officer. Pool Knox was most recently head of human resources for Microsoft’s global cloud and AI workforce. She previously held senior HR roles at Twitter, CBS Interactive and Sony.
Suraj Poozhiyil. (Clearwater Analytics Photo)
— Investment data company Clearwater Analytics appointed Microsoft vet Suraj Poozhiyil as SVP of product. Poozhiyil spent two decades at Microsoft, most recently as partner director of product management for the Dynamics 365 Connected Store.
Based in Boise, Idaho, Clearwater Analytics currently has more than 1,100 employees. Poozhiyil is one of 25 employees who will be based out of the company’s Bellevue, Wash., office.
— Warehousing tech platform Flexe named Ian Charles as the Seattle startup’s first CFO. Charles most recently served as CFO for carpooling network Scoop Technologies.
Earlier this year the company extended its Series C round and is seeing huge tailwinds from an increase in online sales amid the pandemic.
— Seattle event technology company eventcore announced Tim Schmanski as vice president of product and alliances.
Schmanski was most recently the chief solutions architect at Certain, an events automation platform for marketers. In this new role, he will be responsible for technical and agency partnerships as well as eventcore’s product roadmap.
“As companies begin to bring in-person events back into their schedules, we know that all of the proper pieces need to be in place to ensure that the event management solutions behind them exceed expectations,” said Schmanski.
— Mapping startup Unearth Technologies hired Kristine Hopkins as VP of sales and marketing. She most recently was a VP at Bluebeam, a construction collaboration technology company, where she worked for more than a decade.
— Cloud services provider BitTitan hired Khan Klatt as director of engineering. He was most recently VP of technology for global nonprofit Committee for Children and previously a senior director of web application engineering for McGraw-Hill Education.
— Bellevue, Wash.-based Blueprint Technologies promoted Melissa Benton to chief of staff. Benton joined the technology consulting firm last year as managing director of operations.
If you’re in the market for a modern, S3-compatible object storage system that works in multi- and hybrid-cloud environments, you may want to keep your eyes on MinIO, which today unveiled a trio of software updates that enhance its integration with Kubernetes and bolster its enterprise capabilities.
The MinIO object storage system is the brainchild of AB Periasamy, who set out several years ago with the bold goal of “solving” storage. Periasamy, who co-developed the Gluster distributed file system nearly 20 years ago and is the CEO of MinIO, has not achieved that goal just yet, as AWS’s S3 remains the dominant force in petabyte-scale storage. But when you consider that half of the Fortune 500 are MinIO users, then you realize that MinIO is right in the thick of it.
MinIO’s enterprise story improves with today’s announcement, which includes a new Kubernetes operator that simplifies operation; a new operations GUI called the MinIO Console to go along with the existing command line interface; and another GUI called SUBNET Health for monitoring the cluster, fine-tuning performance, and assisting with support calls.
(Piotr-Swat/Shutterstock)
The new Kubernetes operator will not only reduce the technical skills required to operate a MinIO cluster in a Kubernetes environment, but it will also enable organizations to ramp up their use of MinIO environments in a self-service manner, Periasamy says.
“With the introduction of the operator, it’s not just productizing all the operational skills into the system,” he tells Datanami. “With the MinIO operator, you can actually have a multi-tenant, self-service cloud. It’s very much like Amazon and AWS. Once you deploy MinIO in Kubernetes, your customers can come in and self-service. Different applications teams, different departments actually will run their own cluster in their own namespace, sharing the underlying physical infrastructure.”
Previously, MinIO supplied a single Helm chart. After watching the MinIO open source community try to develop an operator that matched MinIO’s specific requirements and functions with the Kubernetes orchestration software, Periasamy decided that it would be best if the folks at MinIO developed a product.
MinIO’s operator is available on all public clouds, and supports the specific Kubernetes distributions used on them, including Amazon’s Elastic Kubernetes Service (EKS), Azure Kubernetes Service (AKS), and Google Kubernetes Engine (GKE). It also supports Anthos. The Kubernetes operator is freely available under MinIO’s AGPL v3 open source license.
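As a concrete sketch of the self-service model, provisioning a tenant through the operator comes down to applying a Tenant custom resource and letting the operator create the underlying pods and volumes. The example below is illustrative only: the field names follow the operator’s Tenant CRD in recent releases and may differ across operator versions, and the tenant name, namespace, and sizing are invented.

```yaml
apiVersion: minio.min.io/v2
kind: Tenant
metadata:
  name: analytics-tenant      # hypothetical tenant name
  namespace: analytics        # each team runs in its own namespace
spec:
  pools:
    - servers: 4              # MinIO server pods in this pool
      volumesPerServer: 4     # drives (PVCs) attached to each pod
      volumeClaimTemplate:
        spec:
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 1Ti
```

Applying a manifest like this in a team’s own namespace is what lets departments share physical infrastructure while operating independent clusters, as Periasamy describes.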
Jonathan Symonds, MinIO’s chief marketing officer, says the new Kubernetes operator puts more distance between MinIO and the other object storage systems. The company’s main competition at this point is AWS and S3, he said.
“You see Pure Storage with the Portworx acquisition, that’s an attempt for them to become more relevant in a Kubernetes world,” Symonds says. “We’re already there. So we have a huge head start. And the operator just puts more and more pressure on those vendors to change their methodology, change their approach, and try and become more Kubernetes native. And I think it’s going to be hard for them. This is a tough spot to be in if you’re not Kubernetes and cloud-native to start with.”
The new MinIO Console, which is offered under the AGPL v3 license as well, will also simplify the deployment and management of MinIO in an enterprise environment.
Up to this point, MinIO has primarily been used by the DevOps community, and those folks are primarily used to working with a command-line interface, Symonds says. But as MinIO expands its reach into the enterprise, it needs to be more friendly to other folks, including IT professionals, and that’s why a graphical user interface (GUI) was needed.
“As we expand the map, we spend more time with IT,” Symonds says. “They’re looking for different interfaces. So the challenge for us is really to build one that has the same granularity, the same control, the same functionality, but do so in a GUI. So that’s what MinIO Console does.”
With just a few mouse clicks in the MinIO Console, users can provision a multi-tenant object storage-as-a-service environment. All of the functions that are available through the command line are also available through the new console, Symonds says.
“We spent a lot of time figuring out what’s the right interface, what’s the right sequence of events to make sure that this was stupid simple, but at the same time very, very powerful. That’s a key build,” Symonds says. “It really expands the audience for somebody who may not even know how to spell Kubernetes–they can still deploy object storage as a service.”
MinIO SUBNET Health
SUBNET Health provides even more insight into the MinIO environment, and even the underlying hardware that the object storage system runs on. The software is only available to customers who subscribe to MinIO’s commercial support program, called SUBNET.
The new software helps to automate root cause analysis by inspecting various components involved in a MinIO cluster, including hard drives, network, CPU, memory, operating systems, containers, and MinIO software components. The software also helps customers quickly get help from MinIO support specialists, who can use SUBNET Health to try and track down issues.
“It’s an enterprise-class interface and was really designed to speed issue resolution, even for non-MinIO problems,” Symonds says. “The depth of information that we get really allows us to determine, relatively quickly, where the problem lies.”
MinIO will lean on SUBNET Health to help it scale its business. The company has close to 100 paying customers at the moment, and the goal is to grow that number fairly quickly to 1,000. “We don’t want to throw people at the problem,” Periasamy says. “SUBNET Health basically automates our work.”
John Hardy has been programming since Turbo Pascal 6. His equation visualization application (Equation Solver) was one of the showcase entries at the Delphi 26th Showcase Challenge and he talked to us about his Delphi adventures throughout the years as a programmer. Visit the Equation Solver website for more information.
When did you start using RAD Studio/Delphi and how long have you been using it?
I started with Turbo Pascal 6 and only switched over to Delphi when version 2 was released.
What was it like building software before you had RAD Studio/Delphi?
When I was a student in Mechanical Engineering, we were taught Basic as a language. Some time later I became a lecturer at a Polytech (Technikon). The electrical students were being taught Turbo Pascal. At this time the lecturer in the electrical department convinced me to switch to Turbo Pascal 6, which was a huge improvement on Basic, especially in terms of debugging and the graphical interface. Once Turbo Pascal became redundant I switched to Delphi 2. This was again a jump in technology and took some time to get used to. However, once I got used to how things worked there was no going back. I particularly liked the code insight, which made debugging very simple.
How did RAD Studio/Delphi help you create your showcase application?
Early on with Delphi I found a book by Ray Konopka on creating components. Some equations in Mechanical Engineering cannot be solved directly. HP calculators at the time could solve most equations. As one does, I wanted to know how! Delphi was the tool that helped me achieve the goal of creating a component that could solve equations for real roots.
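The numerical idea behind a component like Hardy’s — finding real roots of equations that cannot be solved directly — can be sketched with the classic bisection method. Hardy’s implementation is in Delphi and its details aren’t public; this Python sketch, with an invented sample equation, just illustrates the technique.

```python
def bisect(f, lo, hi, tol=1e-9, max_iter=200):
    """Find a real root of f in [lo, hi], assuming f(lo) and f(hi) bracket one."""
    flo = f(lo)
    if flo * f(hi) > 0:
        raise ValueError("f(lo) and f(hi) must differ in sign")
    for _ in range(max_iter):
        mid = (lo + hi) / 2
        fmid = f(mid)
        # Stop when the midpoint value or the interval is small enough.
        if abs(fmid) < tol or (hi - lo) / 2 < tol:
            return mid
        if flo * fmid < 0:
            hi = mid            # root lies in the lower half
        else:
            lo, flo = mid, fmid # root lies in the upper half
    return (lo + hi) / 2

# Example: solve x**3 - x - 2 = 0 on [1, 2]; the real root is about 1.5214.
root = bisect(lambda x: x**3 - x - 2, 1.0, 2.0)
```

Bisection is slow but unconditionally reliable once a sign change is bracketed, which is why root-finding tools (HP’s calculator solvers among them) typically combine it with faster methods.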
What made RAD Studio/Delphi stand out from other options?
Over time I have tried C# and visual studio. I feel most comfortable with Delphi, probably because I have spent so much time with this software.
What made you happiest about working with RAD Studio/Delphi?
I think the best thing about Delphi was the ease at which fully functional programs could be created and deployed.
What have you been able to achieve through using RAD Studio/Delphi to create your equation visualization application?
The type of projects I have been working on are not mainstream. For me, as an amateur, the user-friendly interface and ability to do anything make Delphi the perfect choice.
What are some future plans for Equation Solver, your showcase visualization application?
I want to extend the equation solver to be able to solve for complex roots. I am also working on an equation writer which allows equations to be entered in a more natural way, not as a long string.
Thank you, John! Check out the link below to view his application submission in the Delphi Challenge.
The news: T-Mobile made a series of announcements Wednesday as part of its latest ‘Un-carrier’ initiative, including the official launch of its new home internet service, 5G phone offerings, and new investment in rural areas.
T-Mobile Home Internet: After piloting a home internet service powered by its wireless network, T-Mobile Home Internet is now available to more than 30 million U.S. households. It costs $60 per month — $10 more per month than the pilot program — with average expected speeds of 100 Mbps for most customers and an included 4G/5G gateway device. With the new service, T-Mobile is taking on incumbent internet providers as well as other wireless carriers, such as Verizon, that offer competing services.
T-Mobile Hometown: The Bellevue, Wash.-based company will build hundreds of new retail stores and create 5,000 jobs in small U.S. towns. It is also adding “Hometown Experts” to towns where it can’t build a store, and committing $25 million over five years to fund community development projects in rural areas. Connectivity in small towns is a hot topic: the pandemic highlighted service gaps across rural America, and rural broadband is a focus of President Biden’s new infrastructure plan.
5G phones: T-Mobile said it will let postpaid customers trade in any old phone in working condition for a new Samsung Galaxy A32 5G smartphone for free after 24 monthly bill credits. T-Mobile is looking to capitalize on its 5G wireless network, expanded thanks in large part to its Sprint merger last year. T-Mobile CEO Mike Sievert said previously that T-Mobile has a 5G edge over other carriers because of its mid-band strategy.
Amazon is probably not smiling this morning about an apparent protest movement started by a driver who is placing cardboard boxes upside down during deliveries to make it look like the company’s logo is a frown.
In a post on Reddit, user AugustaSummerz discussed what they called their own movement — “No More Smiles With Amazon.” The post is accompanied by a collage of images showing packages in front of doorways.
The post appeared in the subreddit r/AmazonDSPDrivers, a group for independent Delivery Service Partners that has more than 8,000 members. Replies on the post indicate that other drivers have either already been doing the same thing, think it’s a good idea, or have suggestions for other names for the movement, such as “Amazon Crime.”
Dave Lee, a correspondent with Financial Times, tweeted the Reddit image. He later replied, “Here’s hoping DoorDash drivers never try this.”
Meanwhile on Reddit, Amazon delivery drivers are discussing leaving packages upside down — a frown — as a form of protest against working conditions pic.twitter.com/XLvlFH6lWo
The frowning packages are showing up as Amazon confronts a number of issues related to working conditions for both delivery drivers and fulfillment center employees.
Amazon has been dealing with public relations fallout from reports that many drivers are unable to take bathroom breaks while working and have resorted to urinating in bottles.
The results of a unionization vote at a warehouse in Bessemer, Ala., are also due any day now and could have broad implications for Amazon and the labor movement across the United States.
GeekWire has reached out to Amazon for comment on the frowning packages and will update this story when we hear back.
As COVID-19 sent everyone retreating to social isolation and virtual gatherings, it proved particularly difficult for the performers and musicians who rely on live events to make a living. The pandemic has also made us all really tired of virtual gatherings.
Prefunq is a new service designed to combat so-called “Zoom fatigue” and put artists back to work in front of an audience — even if the audience is still virtual as we all wait for the day when clubs and concerts become a real thing again.
Prefunq founder Tim Keck. (Photo courtesy of Tim Keck)
Who’s behind it: Tim Keck is the former publisher of The Stranger newspaper in Seattle and later president of Index Newspapers, where he also ran the Portland Mercury. Since November he’s been focused solely on EverOut, a spinout website derived from the events listings of those two publications.
Keck started EverOut’s Prefunq as a way to support artists who were struggling with an abrupt loss of income, and as a way to spice up his own staff meetings. Through friends and contacts, he started helping to facilitate virtual performances for a variety of businesses that were conducting gatherings online.
“The beginning of COVID was so depressing for everybody,” Keck said. “You have these all-staff meetings and you’d be staring at all these squares. I just started hiring artists. And it turned out to be kind of transformative. It had a much better vibe and people were ready to actually talk.”
How it works: Prefunq enlists a roster of solo performers, whose bios and song clips are viewable on the website. A business, or whoever else might be holding an online meeting, uses the website to select an artist — or lets Prefunq select for them. Prefunq asks for a description of the event, the start and end time, and how many people are expected.
The artist performs a couple of songs in a 10-15 minute mini concert inside the Zoom call — and there are no requests. Pricing starts at $175 for up to 25 people, with higher prices for larger groups.
“It’s this amazing way for people who work at a company, on a team, to have this really intimate show where they can discover new artists,” Keck said. “The artists can connect with new audiences that they’ve never heard before. The company shows that they care about their staff and supports the arts.”
Featured performers on Prefunq. (Images via Prefunq)
The performers: A variety of singer/songwriters and instrumentalists are available through Prefunq, depending on whether your virtual happy hour needs a piano player or soulful vocalist or you just want some spoken poetry to open an investor meeting.
Seattle jazz guitarist Greg Ruby has performed on Prefunq a couple times, including for a Zoom happy hour for Redmond, Wash.-based Heinz Marketing. He doesn’t find it odd at all to play to a live audience via his computer and says he follows comments like they are applause. As a professional musician, half of his income is derived from live performances and half from teaching at music camps around the country. All of that work disappeared a year ago when the pandemic hit. “Several months into the pandemic, I started teaching guitar lessons online via Zoom,” Ruby said. “This allowed me to get comfortable with being behind the screen.” Teaching one on one online has allowed him to reach a wider national student base, and when Prefunq contacted him he was already adept at performing online. He said the Prefunq gigs have helped “immensely.”
Composer/bassist Evan Flory-Barnes lost about two thirds of his livelihood during the pandemic. He was able to do some outdoor performing last summer and fall, but said Prefunq has definitely helped and come through “at the right time.” Flory-Barnes has performed three gigs and says that while it can feel a little awkward to communicate across Zoom sometimes, it gets better as he dives into it. “I can feel how moved people are and that always feels good. I notice how good I feel after performing every time,” he said.
The meeting holders: Prefunq is catering to bosses and HR pros looking to inject culture, community and creativity into workplaces that have been turned upside down by the lack of in-person gatherings.
Jen Haller, people operations manager at Seattle-based Attunely, said she loves the opportunity to support local, emerging, BIPOC artists — especially during the pandemic when new artists’ opportunities are so limited. Prefunq is exposing the Attunely team to art that they might not independently seek out for themselves, Haller said, while also breaking up the monotony of traditional Zoom interactions. “Our team was so happy to have some variety in their day of back-to-back Zooms. Bringing art into the workday helps build great team culture by acknowledging that there’s more to life than just work,” Haller said.
Seattle consultancy Intentional Futures has used Prefunq twice, including a performance at an all-hands huddle on a Monday morning where singer/songwriter Shaina Shepard joined at 9 a.m. in her PJs, with her coffee, sang some songs, and told some stories. Founder and CEO Michael Dix said that like just about every leader during the pandemic, he’s been looking for new ways to foster meaningful connection between his employees, maintain a healthy culture, and break up the monotony and isolation of working remotely 100% of the time. “Feedback so far has been great,” Dix said of Prefunq gigs. “Just like any live show, adding Prefunq into the mix creates energy, shared experience, and a break from the expected.” Andy Buffelen, head of people and culture for the company, added, “In a time and world where digital interfacing is the norm, it was such a lovely change of pace to have these artists come and join us.”
Traction: EverOut employs seven people right now, including a new hire to handle Prefunq coordination. The service has facilitated 45 performances so far and Keck said it’s growing and he plans to add more artists. Right now, 100% of proceeds have gone to artists, but eventually the plan is to take a cut in the neighborhood of 25%.
Online and in-person future: The pandemic changed where and how we work and Keck expects that some parts of that, like hybrid workplace models, will stick around. And that’s good news for artists comfortable with playing to virtual audiences.
“What I hope is that everything opens up and people are still doing Zoom meetings and they want to keep it interesting and so these artists have a new way to meet new audiences as well as to make some more money,” Keck said. “If that works out that way I’d be just so thrilled.”
Flavius Fernandes has been using Delphi since Borland Delphi 5. His showcase entry (ERP Sirius +Mobile) is featured at the Delphi 26th Showcase Challenge and we interviewed him to learn more about his Delphi journey. You can learn more about his application at the ERP website.
When did you start using RAD Studio/Delphi and how long have you been using it?
I have been using Delphi since Borland Delphi 5, developing various types of business applications since 2000. Our business application is developed in RAD Studio 10.3.3.
What was it like building software before you had RAD Studio/Delphi?
I started developing software using COBOL, RPG, Basic, and Clipper. I looked at many development languages at the time (and still do). Nothing came close to meeting my requirements, which included ease of use, a good code editor, a visual designer, an integrated debugger, and support for third-party plugins. RAD Studio allows me to develop rapidly, from prototypes to a stable, progressive state. It greatly reduces development time, allowing me to spend more time on other objectives.
How did RAD Studio/Delphi help you create your showcase ERP application?
Delphi has been a great development tool in helping me develop ERP Sirius +Mobile. DataSnap is used for our client-server functionality. FireDAC is used to allow us to offer all the major enterprise databases with our application. I like the way DataSnap and FireDAC work together. FireDAC JSON Reflection with TFDMemTable is great for creating desktop or mobile front-ends using REST. Using TFDConnection and TFDQuery is great for the back-ends. It’s important that our applications are aesthetically pleasing to the users, and VCL Styles/FireMonkey styles allow me to make that happen. The visual designer is great for fast prototyping. One can evolve the UI/UX as the application matures very easily. Many components can be used to further reduce development time and add great functionality.
What made RAD Studio/Delphi stand out from other options?
Object Pascal is easy to learn and the Delphi IDE just keeps improving. RAD Studio/Delphi has everything one would need to develop great applications: a code editor, a visual designer, an integrated debugger and native Component Object Model (COM) support. It’s just a great tool for cross-platform development. Database support is a key feature. Delphi has a fast compilation speed and compiles to native code.
What made you happiest about working with RAD Studio/Delphi?
Its rapid product development ability, the new features being added, and cross-platform support. RAD Studio/Delphi allows me to build simple and improve over time. The RAD Studio online community is great for help and ideas. The debugging facilities are great. The Windows UI and VCL components keep improving, and this allows me to make limited use of third-party components. There is always something interesting and new that makes me want to keep up with the latest version of RAD Studio/Delphi. I am at my best when I use RAD Studio to develop solutions on the fly and collaborate with business stakeholders in real time.
What have you been able to achieve through using RAD Studio/Delphi to create the ERP Sirius +Mobile application?
To be able to develop and offer a downloadable ERP with all its advanced features is a great achievement. ERP Sirius offers modules that other ERPs lack, and the list of modules will just grow, improve, and evolve. This is down to the way the application is developed, and RAD Studio/Delphi is the tool that makes it happen. I could say RAD Studio brings out the artistic and creative side in a developer.
What are some future plans for your ERP application?
We have exciting plans for the future. One is hosting our demo database on the Azure cloud and allowing anyone without an active license to look at and interact with the business modules on offer. Our revised Android app will be released as a demo this year, with new modules added to the mobile application. I am really excited about the mobile app. The key feature of our mobile application is the ChimesAI framework, which revolutionizes the way business information is presented and actioned by all business users. Our next release update includes new logistics modules, developed and refined with our past experience building freight forwarding, warehousing, and logistics software. Since the application is designed for global use, multi-language support is also in our future plans. I will be looking for talent and partners to take the application, and my vision of the next business tool, to the next level. I am also thinking about making some parts of the development open source. It would be great if RAD Studio developers around the world could work on our application, because it is designed to be a global application. We have a new blog page on our website where we will share information about our future plans, white papers, etc. The blog will also allow people to share opinions and interact with us on certain topics. We take a great interest in the RAD Studio road map, and once again, thanks to the RAD Studio team at Embarcadero.
Thank you, Flavius! Check out his showcase entry below.
Vijaye Raji (left) and the Statsig team. (Statsig Photo)
New startup: Vijaye Raji, a tech vet who previously led Facebook’s 5,000-person engineering outpost in Seattle, is CEO and founder of Statsig, a Seattle-area company that just formed to help developers build and launch features quickly.
Statsig’s pitch: Raji spent a decade at Facebook and learned how it used internal development tools to streamline the way new features were tested and ultimately implemented. They include Gatekeeper, which lets developers build features visible to only a targeted set of users, and Deltoid, a visual map of how critical metrics are affected by new features.
Statsig wants to bring that power to everyone.
“Big companies shouldn’t be the only ones with such sophisticated tools — it should be liberated and made accessible and available to developers, data scientists, and product managers,” Raji wrote in a blog post.
What makes Statsig different: There are a few competitors already in the market, which Raji said is good for market validation. The company differentiates itself with a usage-based pricing system tied to a customer’s monthly active user count. It includes a free tier that makes it easy for developers to try out the software. Raji said he would divulge more of the secret sauce in future blog posts.
Raji’s background: His tenure at Facebook included stints as vice president of gaming and vice president of entertainment. Raji previously worked at Microsoft for nearly a decade and was a principal software design engineer.
Investor interest: The company did not disclose funding information but at least one venture capitalist — Madrona Venture Group Managing Director and former Microsoft exec S. “Soma” Somasegar — is a fan.
What’s next: Statsig has a small team made up of former Facebook employees based in the Seattle region. It recently opened a beta version of its software and Raji said there has been a steady stream of sign-ups.
Facebook opened an engineering office in Seattle back in 2010 and employs more than 5,000 people in the area. Last year it paid $367.6 million to purchase a brand new 6-acre, 400,000 square-foot complex from REI at the new Spring District development in Bellevue, Wash., just east of Seattle.
JavaScript has a lot of useful built-in methods for string manipulation; one of these methods is split().
In this article we'll be taking a closer look at the split() method and how we can use it in conjunction with regular expressions to split a long string just the way we want.
JavaScript's split() Method
When the split(delimiter, limit) method is used on a string, it returns an array of substrings, and uses the delimiter argument's value as the delimiter. The delimiter argument can also be specified as a regular expression, which will then be used to search through the original string to find delimiters that match the specified expression.
Additionally, we can specify the optional argument limit, which specifies how many elements we want in our resulting substring array. Setting limit=2, for example, will yield an array that contains the first two substrings separated by a delimiter in the original string:
const str = "JavaScript is the best programming language!";
const words = str.split(" ");
console.log(words);
Here, the string will be broken down on each word:
["JavaScript", "is", "the", "best", "programming", "language!"]
Now that we are comfortable with the use of the split() method, let's step it up a notch, and introduce regular expressions to the mix:
const paragraph = `The Answer to the Ultimate Question of Life, the Universe, and Everything is 42. Forty two. That's all there is.`;
// Split by words
const words = paragraph.split(" ");
console.log(words[2]);
// Split by sentences
const sentences = paragraph.split(/[!?.]/);
console.log(sentences[1]);
// Split all characters, with a limit of 2
const firstTwoChars = paragraph.split("", 2);
console.log(firstTwoChars);
// Split and reverse
const reverse = paragraph.split("").reverse().join("");
console.log(reverse);
This results in:
to
Forty two
["T", "h"]
.24 si gnihtyrevE dna ,esrevinU eht ,efiL fo noitseuQ etamitlU eht ot rewsnA ehT
In the second example, we are passing a regular expression as the argument for the split() method.
/[!?.]/ represents a character set: ! or ? or .
Put simply, we are splitting the string at any of the specified characters.
In the third example, we are passing 2 as the second argument, limiting the resulting substring array to two elements.
In the last example, we are reversing the string using the built-in reverse() method. Because reverse() is an array method, we'll first split the original string into an array of individual characters, by using the split("") method, and then reverse() it.
Finally, we can join() the results to create a reversed string from the array of characters.
Conclusion
In this tutorial, we took a quick look at how to split a string in vanilla JavaScript. We've gone over the built-in split() method, as well as how to use it with regular expressions.
The next two days are going to determine which of the two sides has the upper hand: Amazon, which aggressively opposed unionization in the Alabama fulfillment center, or the Retail Wholesale and Department Store Union, which has been trying to convince the plant’s workers to form the retail company’s first unionized employee group.
So what happens now?
On Tuesday at the NLRB office in Birmingham, Ala., the challenged ballots — those that either side protested — were set aside. The number of challenged ballots was estimated at 10% of the overall number of votes cast. (The total number of votes will remain a secret until the final tally on Thursday.)
The challenged votes will only be counted by the NLRB if counting them could change the election outcome. In other words, if there are not enough of them to flip the election, they will remain sealed.
On Wednesday, NLRB election workers will separate the validated ballots, the remaining 90%, from their envelopes. The ballots arrived in yellow envelopes containing blue ballot envelopes; the two are placed in separate piles. The blue ballots will be counted on camera in a video broadcast, tentatively scheduled for Thursday.
NLRB officials separate the two so that during the broadcast of the tally — the votes are held up to the camera one-by-one — the employee identification from the yellow envelope can’t be seen. This is to protect employees who fear reprisals based on their votes, according to the NLRB.
Amazon, union organizers, and the media will be able to watch the live election.
When final votes are counted, NLRB officials will announce the result, perhaps late Thursday. If the differential is within the number of challenged ballots, a final result will be delayed until the labor board can rule on the challenged ballots. If not, a winner will be declared.
However, both sides will retain opportunities to appeal the overall election process, a step that could eventually reach federal court and leave the election undetermined for a while.
We’ll have coverage on GeekWire of the results as soon as they are announced.
Google Cloud offers a Natural Language API, which allows a developer to take unstructured text as input and use Google’s machine learning capabilities to derive insight from it. A number of different operations can be performed on a piece of text, including syntax analysis, entity analysis, custom entity extraction, sentiment analysis, custom sentiment analysis, content classification, custom content classification, custom models, and spatial structure understanding. The Google Natural Language APIs feature multi-language support and large-dataset support, and give you access to Google’s AutoML models.
RAD Studio and Delphi give you easy access to all of this natural language processing capability via Google’s REST API. RAD Studio includes a tool called the REST Debugger where you can configure all of your REST API settings and then export them as components into your Delphi application. This includes wiring up the incoming data automatically to an in-memory database table (TFDMemTable). It literally takes only a few minutes to get up and running with Google Cloud’s powerful Natural Language API from within Delphi and RAD Studio. Additionally, the application built here (with source code available at the end of this blog post) uses Delphi’s cross-platform FireMonkey framework, which supports Windows, Linux, macOS, Android, and iOS with a single codebase and a single responsive UI. Let’s dive into the Google Cloud Natural Language API and how to build a desktop and mobile application utilizing its REST API.
What can I do with the Google Cloud Natural Language API?
On Google’s website the full REST reference for the Natural Language API is available. Here are the different endpoints available in the API:
analyzeEntities POST /v1beta2/documents:analyzeEntities
analyzeEntitySentiment POST /v1beta2/documents:analyzeEntitySentiment
analyzeSentiment POST /v1beta2/documents:analyzeSentiment
analyzeSyntax POST /v1beta2/documents:analyzeSyntax
annotateText POST /v1beta2/documents:annotateText
classifyText POST /v1beta2/documents:classifyText
How can I set up the Natural Language API credentials?
An API key is needed in order to use the above REST APIs. You will need to visit the following URL, which will walk you through creating a project and enabling the Natural Language API on your Google Cloud account.
How do I connect to the Google Cloud Natural Language API REST end point with Delphi?
I built a sample application in Delphi using the REST Debugger which utilizes the analyzeEntities endpoint. There is also a video tutorial for using the RAD Studio REST Debugger to automatically create the REST components and paste them into your app. The analyzeEntities endpoint breaks down the content of the text into entities that are contained within Google’s machine learning database. Entities have their own ID (called mid), a type classification (like ‘ORGANIZATION’), and additional metadata, such as a Wikipedia URL, to provide context for that entity.
Here are the three components in Delphi that make the API call. They are the TRESTClient, TRESTRequest, and TRESTResponse. You will notice that the API URL is set on the BaseURL of TRESTClient. On the TRESTRequest component you will see that the request type is set to rmPOST, the ContentType is set to ctAPPLICATION_JSON, and that it contains one request body for the POST which is set to:
{document: {type: "PLAIN_TEXT", content: "Embarcadero Delphi is super powerful."}}
You will also notice that on the TRESTResponse component the RootElement is set to ‘entities’. This means that the ‘entities’ element in the JSON is specifically selected to be pulled into the in-memory table (TFDMemTable).
object RESTClient1: TRESTClient
Accept = 'application/json, text/plain; q=0.9, text/html;q=0.8,'
AcceptCharset = 'utf-8, *;q=0.8'
BaseURL =
'https://content-language.googleapis.com/v1beta1/documents:analyz' +
'eEntities?alt=json&key=your_api_key_here'
ContentType = 'application/json'
Params = <>
Left = 40
Top = 328
end
object RESTRequest1: TRESTRequest
AssignedValues = [rvConnectTimeout, rvReadTimeout]
Client = RESTClient1
Method = rmPOST
Params = <
item
Kind = pkREQUESTBODY
Name = 'body993652B584EA4AB59A378CFB104511F8'
Value =
'{document: {type: "PLAIN_TEXT", content: "Embarcadero Delphi is ' +
'super powerful."}}'
ContentType = ctAPPLICATION_JSON
end>
Response = RESTResponse1
Left = 128
Top = 328
end
object RESTResponse1: TRESTResponse
ContentType = 'application/json'
RootElement = 'entities'
Left = 216
Top = 336
end
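As a side note, the same POST body can be assembled and inspected outside the IDE. Below is a minimal Python sketch that mirrors the request configured on the components above; the endpoint URL and the your_api_key_here placeholder are copied from the BaseURL shown, and the actual network call is left commented out because it requires a valid API key:

```python
import json

# Build the same JSON document the TRESTRequest body contains.
def build_analyze_entities_body(text):
    return json.dumps({
        "document": {"type": "PLAIN_TEXT", "content": text}
    })

body = build_analyze_entities_body("Embarcadero Delphi is super powerful.")
print(body)

# Sending the request would look like this (needs a real key):
# import urllib.request
# url = ("https://content-language.googleapis.com/v1beta1/"
#        "documents:analyzeEntities?alt=json&key=your_api_key_here")
# req = urllib.request.Request(url, data=body.encode("utf-8"),
#                              headers={"Content-Type": "application/json"})
# entities = json.load(urllib.request.urlopen(req))["entities"]
```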
What does the Natural Language API analyzeEntities endpoint return?
Here is a sample of the response JSON you will receive from the API:
Now that we’ve seen how to configure the Natural Language API credentials, the REST endpoints needed, and the components in Delphi to connect to those endpoints, let’s take a look at the full sample application.
How do I build a Windows 10 desktop or Android/iOS mobile device application utilizing the Google Cloud Natural Language API?
The sample application features a TMemo as a place to paste in text to be analyzed, a TStringGrid to display the results of the REST API call, and a TWebBrowser component to navigate to and display the wikipedia_url property for each entity returned in the list. When an entity is selected in the TStringGrid, its Wikipedia URL is loaded in the TWebBrowser control.
The code for the application is pretty simple and consists of a button click to execute the REST request. The JSON POST content is built on the fly to take in the dynamic text from the TMemo. It also contains some additional code which sets the TWebBrowser control to utilize IE11.
procedure TMainForm.Button1Click(Sender: TObject);
begin
var JsonObject := TJSONObject.Create;
try
var document := TJSONObject.Create;
document.AddPair(TJSONPair.Create('type', 'PLAIN_TEXT'));
document.AddPair(TJSONPair.Create('content', RequestMemo.Lines.Text));
JsonObject.AddPair(TJSONPair.Create('document', document));
RESTRequest1.Params[0].Value := JsonObject.ToJSON;
RESTRequest1.Execute;
finally
JsonObject.Free;
end;
end;
In this blog post we’ve seen how to sign up for the Google Cloud Natural Language API and credentials to use the REST API. We’ve also seen the different endpoints it offers including the Analyze Entities endpoint. We’ve seen how to use the RAD Studio REST Debugger to connect to the endpoint and copy that code into a real application. And finally we’ve seen a real Windows 10 (and Linux and macOS and Android and iOS) application which connects to the Google Cloud Natural Language API and executes an entity analysis on a piece of text.
Amazon CEO Jeff Bezos. (GeekWire File Photo / Kevin Lisota)
Amazon founder Jeff Bezos today endorsed the Biden administration’s call for a higher corporate tax rate to help pay for the nation’s crumbling infrastructure.
In a statement posted on his company’s website, Bezos noted that historically both Democrats and Republicans have called for additional infrastructure spending — federal money for everything from bridges to high-speed internet. Then, he added, American companies should help pay for it.
“We recognize this investment will require concessions from all sides—both on the specifics of what’s included as well as how it gets paid for (we’re supportive of a rise in the corporate tax rate),” he wrote.
The news comes a week after President Joe Biden twice mentioned the company by name in a speech touting his infrastructure plan, saying U.S. companies such as Amazon “use various loopholes so they pay not a single solitary penny in federal income tax.”
“A fireman, a teacher paying 22% — Amazon and 90 other major corporations paying zero in federal taxes? I’m going to put an end to that,” he added.
The $2.3 trillion plan calls for a $600 billion investment in modernizing roads, rail cars, and buses, plus a national network of electric vehicle recharging stations. Additional money is earmarked for veterans hospitals, improved affordable housing, and additional high-speed broadband, which became a pressing issue following many state mandates for at-home schooling during the pandemic.
Some observers have cast a skeptical eye on Bezos’ endorsement of higher corporate taxes. In his statement, the Amazon boss didn’t specifically endorse the infrastructure plan or its proposed 7-percentage-point tax hike. The company historically has paid little or no taxes, using a myriad of tax credits and deductions to reduce its federal tax bill.
Also, because Amazon reinvests much of its earnings, it often shows low profit margins even when posting record revenue. Amazon reported revenue of $386 billion last year and operating income of $22.9 billion, boosted by a pandemic-driven surge as customers relied on its online shopping and cloud computing services.
Amazon also remains under public and government scrutiny on several additional fronts including antitrust issues, and the working conditions of its lowest-paid employees. Amazon warehouse workers in Bessemer, Ala., are waiting for the final count on a vote to unionize. It would be the first unionized workforce at the online retail and cloud-computing giant.
A preliminary result is expected later this week or early next week. Amazon has fought the Retail Wholesale and Department Store Union’s efforts to organize the nearly 6,000 workers there.
A demonstration of a new system that uses video footage to collect pulse and heart rate information. (UW Photo)
At an international health conference this week, scientists with the University of Washington and Microsoft Research will virtually present new technology that allows medical providers to remotely check a patient’s pulse and heart rate.
The tool uses the camera on a smartphone or computer to capture video collected of a person’s face. That video is analyzed to measure changes in the light reflected by a patient’s skin, which correlates to changes in blood volume and motion that are caused by blood circulation.
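As a toy illustration of the signal-processing idea only, not the researchers’ actual method, a dominant pulse frequency can be recovered from a noisy brightness trace with a Fourier transform. The simulated 72-beats-per-minute signal below is made up for demonstration:

```python
import numpy as np

# Simulate 10 seconds of per-frame average skin brightness at 30 fps:
# a weak periodic component at 1.2 Hz (72 bpm) buried in noise.
fs = 30.0
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
signal = 0.05 * np.sin(2 * np.pi * 1.2 * t) + rng.normal(0, 0.02, t.size)

# Find the dominant frequency in a plausible heart-rate band (0.7-4 Hz).
spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
band = (freqs >= 0.7) & (freqs <= 4.0)
pulse_hz = freqs[band][np.argmax(spectrum[band])]
print(round(pulse_hz * 60))  # estimated beats per minute
```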
Xin Liu, a doctoral student in the Paul G. Allen School of Computer Science & Engineering at the University of Washington. (UW Photo)
The UW and Microsoft researchers used machine learning and three datasets of videos and health stats to train their system. And as has been the case with various image- and video-related machine learning projects, the technology performed less accurately among people of different races. In this case, the challenge is that lighter skin is more reflective, while darker skin absorbs more light, and the tool needs to perceive subtle changes in the reflections.
“Every person is different. So this system needs to be able to quickly adapt to each person’s unique physiological signature, and separate this from other variations, such as what they look like and what environment they are in,” said Xin Liu, lead author of the research and a UW doctoral student at the Paul G. Allen School of Computer Science & Engineering.
The researchers came up with a fix to the problem: the system requires the user to collect 18 seconds of video that calibrates the device before it calculates pulse and heart rate. The calibration phase can adjust for skin tone, the patient’s age (thin, young skin on babies and kids behaves differently from the aged skin of an older user), facial hair, background, lighting and other factors. The scientists are still working to improve performance, but the strategy greatly increased the accuracy of the system.
The use of calibration to fine-tune performance means that machine learning can be implemented with smaller datasets that might not be perfectly representative of a population.
Daniel McDuff, a principal researcher with Microsoft Research. (Microsoft Photo)
That’s good news, said Daniel McDuff, one of the co-authors and a principal researcher at Microsoft Research. Smaller datasets lead to greater preservation of privacy, as fewer people need to contribute information. They make machine learning accessible to a wider range of developers. And they mean that one entity isn’t left holding massive amounts of information captured in global datasets.
“Personalization is always going to be necessary for the best performance,” McDuff said.
The system also protects private information because it can be run entirely on a phone or other device, keeping the data out of the cloud.
The researchers’ next step, already in the works, is to test the technology in a clinical setting.
Shwetak Patel, a professor in the Allen School and the Department of Electrical & Computer Engineering, was a senior author of the UW research. Patel has been working for many years on technology that turns ordinary smartphones into health monitoring devices. He is the co-founder of Senosis Health, a UW spinoff that was acquired by Google.
Other authors include Ziheng Jiang, a doctoral student in the Allen School; Josh Fromm, a UW graduate who now works at OctoML; and Xuhai Xu, a doctoral student in the Information School.
Shwetak Patel, a professor in the Paul G. Allen School of Computer Science & Engineering and the Department of Electrical & Computer Engineering at the University of Washington. (UW Photo)
The research was funded by the Bill & Melinda Gates Foundation, Google and the UW.
As digital health is riding a COVID-fueled wave of popularity and being stoked with millions of dollars in new investments, researchers are hustling to develop tech tools that can deliver more robust healthcare in remote settings.
Developments that turn ordinary tech devices into tools for healthcare are well timed to meet the growing demand for telehealth. Amazon last month said that it will expand its Amazon Care remote health service to non-employees, first in Washington state and then nationwide later this year. Seattle telemedicine startup 98point6 raised $118 million in October as its membership service grows quickly amid the pandemic.
A separate group of UW researchers revealed technology last month that uses machine learning algorithms to turn smart speakers into sensitive medical devices that can detect irregular heartbeats.
“An opportunity of a lifetime.” Many businesses would not describe the COVID-19 pandemic in these words, but forward-thinking leaders are doing just that. Framing a crisis as an opportunity to reinvent their organizations has been essential for these leaders. These successful individuals will tell you that it is a growth mindset, executed with data and agility, that can help reimagine business models, reshape operations and reimagine a new future.
What makes a crisis like COVID-19 more challenging than an everyday obstacle is the multi-level disruption it creates. Consumer needs are thrown into continuous flux, work-from-home mandates have a drastic impact on employees, and supply chains and trade routes are in turmoil. Any one of these dimensions would ordinarily pose a top-level challenge, but together these drivers create a crisis that requires a unique response. Yet successful leaders are able to act with precision and focus, using a foundation of real-time data to navigate with confidence.
With this acceleration of digital, consumer data has become far more critical to businesses. Previously, it was acceptable to have data collected and analyzed on a weekly or monthly basis, as trends evolved progressively over time. Now, however, analytical teams are finding that models must be trained and re-trained on a daily or weekly basis. Rather than throwing out analytical models, it is essential to have real-time models that enable a business to predict and respond to fast-coming changes. Access to real-time insights is the difference between harnessing opportunistic trends and responding to customer needs, versus lagging behind the competition and missing those trends and needs entirely. Companies that can harness real-time data and insights are the ones able to keep up with fast shifts in consumer trends.
Cut Out the Waste with Efficiency and Agility
With an unprecedented rate of change of consumer needs, businesses are having to adjust their products, services and operations to adapt. As the uncertainty of the pandemic places a strain on businesses, leadership teams are looking to rearchitect their big monolithic way of running an enterprise, which no longer fits an environment that is highly dynamic. Leaders need their organizations to respond with ease and agility, to re-orient investments to stave off negative impacts and invest in new opportunity. Hence, one of the top use cases at this time is using data to improve efficiency and cut out waste, thus enabling the required agility and flexibility.
To support and drive efficiency, organizations are looking to technology. Heavily siloed data that is difficult and complex to reconcile, infrequently updated, or poorly organized is no longer acceptable. Hence, leadership teams are specifically investing in consolidating data environments and migrating traditional on-premises use cases to cloud technologies that provide more agility. With cloud providing the ability to consume and pay for technology on an as-needed basis, enterprises can spin up new data and analytical use cases with ease, whether that be analyzing and tracking sales, deploying projects to reduce costs, or adapting product and service offerings with embedded analytics.
Data As a Way of Life
Pre-COVID-19, many organizations were already contending with mountains of data, but not every organization prioritized having the expertise, motivation, and capacity to use it effectively. Grappling with the business impacts of the pandemic has shone a light on just how important it is to leverage data. It is data that uncovers actionable insights and drives the decision-making that can define the path forward in a challenging landscape, as well as enable new opportunities.
By treating the disruption of the COVID-19 pandemic as an opportunity, leaders can reimagine their business models through digitalization, reshape operations to become more agile, and reimagine the future of how their businesses are run. Leveraging data, organizations can navigate the quick decisions needed to pivot in response to a changing world and become more resilient. However, it must start with a fundamental growth mindset, backed by data that can provide confidence in taking action. It is the organizations that can harness data and analytics to access the growth opportunity that will withstand competition and thrive following the crisis of 2020.
About the author: As Vice President of Strategy at Teradata, Dr. Yasmeen Ahmad helps the company lead with a data-driven mindset, establish a global community of data and analytics experts, and build an integrated ecosystem to ensure data is developed as an asset for current and future needs. Yasmeen has supported multiple organizations across industries in their execution of key transformation objectives, including the pivot to analytics, as-a-service, subscription and cloud. She was named one of the Top 50 Data Leaders and Influencers by Information Age, as well as Data Scientist of the Year by Computing Magazine.
I have been showing complete, industry-ready solutions built with RAD Server. For instance, the Field Services Industry template contains REST endpoints which the Field Service Admin and Field Service App connect to. It uses InterBase on the backend for its database storage.
Or the Hospitality Industry template, which includes a mobile client application for collecting survey data, a back-end server to store data and administer surveys, and a web client for viewing survey data, along with RAD Server multi-tenancy support.
To easily deploy your solutions, Embarcadero Technologies provides ready-to-use installers to deploy RAD Server on Linux and Windows servers.
How can I easily install RAD Server on Windows and Linux?
Be sure to head over and check out the RAD Server Windows & Linux installers on the GetIt portal and download them in the IDE!
It can be challenging to develop a neural network predictive model for a new dataset.
One approach is to first inspect the dataset and develop ideas for what models might work, then explore the learning dynamics of simple models on the dataset, then finally develop and tune a model for the dataset with a robust test harness.
This process can be used to develop effective neural network models for classification and regression predictive modeling problems.
In this tutorial, you will discover how to develop a Multilayer Perceptron neural network model for the cancer survival binary classification dataset.
After completing this tutorial, you will know:
How to load and summarize the cancer survival dataset and use the results to suggest data preparations and model configurations to use.
How to explore the learning dynamics of simple MLP models on the dataset.
How to develop robust estimates of model performance, tune model performance and make predictions on new data.
Let’s get started.
Develop a Neural Network for Cancer Survival Dataset Photo by Bernd Thaller, some rights reserved.
Tutorial Overview
This tutorial is divided into 4 parts; they are:
Haberman Breast Cancer Survival Dataset
Neural Network Learning Dynamics
Robust Model Evaluation
Final Model and Make Predictions
Haberman Breast Cancer Survival Dataset
The first step is to define and explore the dataset.
We will be working with the “haberman” standard binary classification dataset.
The dataset describes breast cancer patient data, and the outcome is patient survival: specifically, whether or not the patient survived for five years or longer.
This is a standard dataset used in the study of imbalanced classification. According to the dataset description, the operations were conducted between 1958 and 1970 at the University of Chicago’s Billings Hospital.
There are 306 examples in the dataset, and there are 3 input variables; they are:
The age of the patient at the time of the operation.
The two-digit year of the operation.
The number of “positive axillary nodes” detected, a measure of whether cancer has spread.
As such, we have no control over the selection of cases that make up the dataset or features to use in those cases, other than what is available in the dataset.
Although the dataset describes breast cancer patient survival, given the small dataset size and the fact that the data is based on breast cancer diagnoses and operations from many decades ago, any models built on this dataset are not expected to generalize.
Note: to be crystal clear, we are NOT “solving breast cancer“. We are exploring a standard classification dataset.
Below is a sample of the first 5 rows of the dataset.
We can load the dataset as a pandas DataFrame directly from the URL; for example:
# load the haberman dataset and summarize the shape
from pandas import read_csv
# define the location of the dataset
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/haberman.csv'
# load the dataset
df = read_csv(url, header=None)
# summarize shape
print(df.shape)
Running the example loads the dataset directly from the URL and reports the shape of the dataset.
In this case, we can confirm that the dataset has 4 variables (3 input and one output) and that the dataset has 306 rows of data.
This is not many rows of data for a neural network and suggests that a small network, perhaps with regularization, would be appropriate.
It also suggests that using k-fold cross-validation would be a good idea, given that it will give a more reliable estimate of model performance than a train/test split, and because a single model will fit in seconds rather than the hours or days that the largest datasets can require.
(306, 4)
Next, we can learn more about the dataset by looking at summary statistics and a plot of the data.
# show summary statistics and plots of the haberman dataset
from pandas import read_csv
from matplotlib import pyplot
# define the location of the dataset
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/haberman.csv'
# load the dataset
df = read_csv(url, header=None)
# show summary statistics
print(df.describe())
# plot histograms
df.hist()
pyplot.show()
Running the example first loads the data and then prints summary statistics for each variable.
We can see that values vary with different means and standard deviations; perhaps some normalization or standardization would be required prior to modeling.
A histogram plot is then created for each variable.
We can see that perhaps the first variable has a Gaussian-like distribution and the next two input variables may have an exponential distribution.
There may be some benefit in using a power transform on each variable in order to make the probability distribution less skewed, which will likely improve model performance.
Histograms of the Haberman Breast Cancer Survival Classification Dataset
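As a hedged sketch of that power-transform idea (an optional preprocessing step, not used by the models below), scikit-learn's PowerTransformer could apply a Yeo-Johnson transform. The exponential data here is synthetic and purely illustrative, so the snippet runs without downloading the dataset:

```python
# A sketch of the power-transform idea: Yeo-Johnson via scikit-learn's
# PowerTransformer. Shown on synthetic exponential data (not the Haberman
# dataset) so it runs offline; the tutorial's models below do not use it.
import numpy as np
from sklearn.preprocessing import PowerTransformer

rng = np.random.default_rng(1)
skewed = rng.exponential(scale=2.0, size=(306, 1)).astype('float32')
pt = PowerTransformer(method='yeo-johnson')
transformed = pt.fit_transform(skewed)
print(transformed.shape)  # (306, 1)
```

PowerTransformer also standardizes the output by default, so the transformed values have roughly zero mean and unit variance.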
We can see some skew in the distribution of examples between the two classes, meaning that the classification problem is not balanced. It is imbalanced.
It may be helpful to know how imbalanced the dataset actually is.
We can use the Counter object to count the number of examples in each class, then use those counts to summarize the distribution.
The complete example is listed below.
# summarize the class ratio of the haberman dataset
from pandas import read_csv
from collections import Counter
# define the location of the dataset
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/haberman.csv'
# define the dataset column names
columns = ['age', 'year', 'nodes', 'class']
# load the csv file as a data frame
dataframe = read_csv(url, header=None, names=columns)
# summarize the class distribution
target = dataframe['class'].values
counter = Counter(target)
for k,v in counter.items():
per = v / len(target) * 100
print('Class=%d, Count=%d, Percentage=%.3f%%' % (k, v, per))
Running the example summarizes the class distribution for the dataset.
We can see that class 1 for survival has the most examples at 225, or about 74 percent of the dataset. We can see class 2 for non-survival has fewer examples at 81, or about 26 percent of the dataset.
The class distribution is skewed, but it is not severely imbalanced.
This is helpful because if we use classification accuracy, then any model that achieves an accuracy less than about 73.5% does not have skill on this dataset.
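That threshold is simply the majority-class share of the data; a quick check of the arithmetic:

```python
# The no-skill baseline: always predicting the majority class (class 1,
# survival) is correct for 225 of the 306 rows.
majority, minority = 225, 81
baseline = majority / (majority + minority)
print('Baseline accuracy: %.3f' % baseline)  # Baseline accuracy: 0.735
```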
Now that we are familiar with the dataset, let’s explore how we might develop a neural network model.
Neural Network Learning Dynamics
We will develop a Multilayer Perceptron (MLP) model for the dataset using TensorFlow.
We cannot know what model architecture or learning hyperparameters would be good or best for this dataset, so we must experiment and discover what works well.
Given that the dataset is small, a small batch size is probably a good idea, e.g. 16 or 32 rows. Using the Adam version of stochastic gradient descent is a good idea when getting started as it will automatically adapt the learning rate and works well on most datasets.
Before we evaluate models in earnest, it is a good idea to review the learning dynamics and tune the model architecture and learning configuration until we have stable learning dynamics, then look at getting the most out of the model.
We can do this by using a simple train/test split of the data and reviewing plots of the learning curves. This will help us see whether we are over-learning or under-learning; then we can adapt the configuration accordingly.
First, we must ensure all input variables are floating-point values and encode the target label as integer values 0 and 1.
...
# ensure all data are floating point values
X = X.astype('float32')
# encode strings to integer
y = LabelEncoder().fit_transform(y)
Next, we can split the dataset into input and output variables, then into 50/50 train and test sets.
We must ensure that the split is stratified by class, ensuring that the train and test sets have the same distribution of class labels as the full dataset.
...
# split into input and output columns
X, y = df.values[:, :-1], df.values[:, -1]
# split into train and test datasets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, stratify=y, random_state=3)
We can define a minimal MLP model. In this case, we will use one hidden layer with 10 nodes and one output layer (chosen arbitrarily). We will use the ReLU activation function in the hidden layer and the “he_normal” weight initialization, as together, they are a good practice.
The output of the model is a sigmoid activation for binary classification and we will minimize binary cross-entropy loss.
...
# determine the number of input features
n_features = X.shape[1]
# define model
model = Sequential()
model.add(Dense(10, activation='relu', kernel_initializer='he_normal', input_shape=(n_features,)))
model.add(Dense(1, activation='sigmoid'))
# compile the model
model.compile(optimizer='adam', loss='binary_crossentropy')
We will fit the model for 200 training epochs (chosen arbitrarily) with a batch size of 16 because it is a small dataset.
We are fitting the model on the raw data, which we think might not be a good idea, but it is an important starting point.
...
# fit the model
history = model.fit(X_train, y_train, epochs=200, batch_size=16, verbose=0, validation_data=(X_test,y_test))
At the end of training, we will evaluate the model’s performance on the test dataset and report performance as the classification accuracy.
...
# predict test set
yhat = (model.predict(X_test) > 0.5).astype('int32').ravel()
# evaluate predictions
score = accuracy_score(y_test, yhat)
print('Accuracy: %.3f' % score)
Finally, we will plot learning curves of the cross-entropy loss on the train and test sets during training.
Tying this all together, the complete example of evaluating our first MLP on the cancer survival dataset is listed below.
# fit a simple mlp model on the haberman and review learning curves
from pandas import read_csv
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
from sklearn.metrics import accuracy_score
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense
from matplotlib import pyplot
# load the dataset
path = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/haberman.csv'
df = read_csv(path, header=None)
# split into input and output columns
X, y = df.values[:, :-1], df.values[:, -1]
# ensure all data are floating point values
X = X.astype('float32')
# encode strings to integer
y = LabelEncoder().fit_transform(y)
# split into train and test datasets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, stratify=y, random_state=3)
# determine the number of input features
n_features = X.shape[1]
# define model
model = Sequential()
model.add(Dense(10, activation='relu', kernel_initializer='he_normal', input_shape=(n_features,)))
model.add(Dense(1, activation='sigmoid'))
# compile the model
model.compile(optimizer='adam', loss='binary_crossentropy')
# fit the model
history = model.fit(X_train, y_train, epochs=200, batch_size=16, verbose=0, validation_data=(X_test,y_test))
# predict test set
yhat = (model.predict(X_test) > 0.5).astype('int32').ravel()
# evaluate predictions
score = accuracy_score(y_test, yhat)
print('Accuracy: %.3f' % score)
# plot learning curves
pyplot.title('Learning Curves')
pyplot.xlabel('Epoch')
pyplot.ylabel('Cross Entropy')
pyplot.plot(history.history['loss'], label='train')
pyplot.plot(history.history['val_loss'], label='val')
pyplot.legend()
pyplot.show()
Running the example first fits the model on the training dataset, then reports the classification accuracy on the test dataset.
Kick-start your project with my new book Data Preparation for Machine Learning, including step-by-step tutorials and the Python source code files for all examples.
In this case we can see that the model performs better than a no-skill model, given that the accuracy is above about 73.5%.
Accuracy: 0.765
Line plots of the loss on the train and test sets are then created.
We can see that the model quickly finds a good fit on the dataset and does not appear to be over or underfitting.
Learning Curves of Simple Multilayer Perceptron on Cancer Survival Dataset
Now that we have some idea of the learning dynamics for a simple MLP model on the dataset, we can look at developing a more robust evaluation of model performance on the dataset.
Robust Model Evaluation
The k-fold cross-validation procedure can provide a more reliable estimate of MLP performance, although it can be very slow.
This is because k models must be fit and evaluated. This is not a problem when the dataset size is small, such as the cancer survival dataset.
We can use the StratifiedKFold class and enumerate each fold manually, fit the model, evaluate it, and then report the mean of the evaluation scores at the end of the procedure.
...
# prepare cross validation
kfold = StratifiedKFold(10, shuffle=True, random_state=1)
# enumerate splits
scores = list()
for train_ix, test_ix in kfold.split(X, y):
# fit and evaluate the model...
...
...
# summarize all scores
print('Mean Accuracy: %.3f (%.3f)' % (mean(scores), std(scores)))
We can use this framework to develop a reliable estimate of MLP model performance with our base configuration, and even with a range of different data preparations, model architectures, and learning configurations.
It is important that we first developed an understanding of the learning dynamics of the model on the dataset in the previous section before using k-fold cross-validation to estimate performance. If we started to tune the model directly, we might get good results, but if not, we might have no idea why, e.g. that the model was overfitting or underfitting.
If we make large changes to the model again, it is a good idea to go back and confirm that the model is converging appropriately.
The complete example of this framework to evaluate the base MLP model from the previous section is listed below.
# k-fold cross-validation of base model for the haberman dataset
from numpy import mean
from numpy import std
from pandas import read_csv
from sklearn.model_selection import StratifiedKFold
from sklearn.preprocessing import LabelEncoder
from sklearn.metrics import accuracy_score
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense
# load the dataset
path = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/haberman.csv'
df = read_csv(path, header=None)
# split into input and output columns
X, y = df.values[:, :-1], df.values[:, -1]
# ensure all data are floating point values
X = X.astype('float32')
# encode strings to integer
y = LabelEncoder().fit_transform(y)
# prepare cross validation
kfold = StratifiedKFold(10, shuffle=True, random_state=1)
# enumerate splits
scores = list()
for train_ix, test_ix in kfold.split(X, y):
# split data
X_train, X_test, y_train, y_test = X[train_ix], X[test_ix], y[train_ix], y[test_ix]
# determine the number of input features
n_features = X.shape[1]
# define model
model = Sequential()
model.add(Dense(10, activation='relu', kernel_initializer='he_normal', input_shape=(n_features,)))
model.add(Dense(1, activation='sigmoid'))
# compile the model
model.compile(optimizer='adam', loss='binary_crossentropy')
# fit the model
model.fit(X_train, y_train, epochs=200, batch_size=16, verbose=0)
# predict test set
yhat = (model.predict(X_test) > 0.5).astype('int32').ravel()
# evaluate predictions
score = accuracy_score(y_test, yhat)
print('>%.3f' % score)
scores.append(score)
# summarize all scores
print('Mean Accuracy: %.3f (%.3f)' % (mean(scores), std(scores)))
Running the example reports the model performance for each iteration of the evaluation procedure, then prints the mean and standard deviation of classification accuracy at the end of the run.
In this case, we can see that the MLP model achieved a mean accuracy of about 75.2 percent, which is pretty close to our rough estimate in the previous section.
This confirms our expectation that the base model configuration may work better than a naive model for this dataset.
In fact, this is a challenging classification problem and achieving a score above about 73.5% is good.
Next, let’s look at how we might fit a final model and use it to make predictions.
Final Model and Make Predictions
Once we choose a model configuration, we can train a final model on all available data and use it to make predictions on new data.
In this case, we will use the simple model configuration evaluated above, fit with a small batch size, as our final model.
We can prepare the data and fit the model as before, although on the entire dataset instead of a training subset of the dataset.
...
# split into input and output columns
X, y = df.values[:, :-1], df.values[:, -1]
# ensure all data are floating point values
X = X.astype('float32')
# encode strings to integer
le = LabelEncoder()
y = le.fit_transform(y)
# determine the number of input features
n_features = X.shape[1]
# define model
model = Sequential()
model.add(Dense(10, activation='relu', kernel_initializer='he_normal', input_shape=(n_features,)))
model.add(Dense(1, activation='sigmoid'))
# compile the model
model.compile(optimizer='adam', loss='binary_crossentropy')
We can then use this model to make predictions on new data.
First, we can define a row of new data.
...
# define a row of new data
row = [30,64,1]
Note: I took this row from the first row of the dataset and the expected label is a ‘1’.
We can then make a prediction.
...
# make prediction
yhat = (model.predict([row]) > 0.5).astype('int32').ravel()
Then invert the transform on the prediction, so we can use or interpret the result as the correct label (which is just an integer for this dataset).
...
# invert transform to get label for class
yhat = le.inverse_transform(yhat)
And in this case, we will simply report the prediction.
Tying this all together, the complete example of fitting a final model for the haberman dataset and using it to make a prediction on new data is listed below.
# fit a final model and make predictions on new data for the haberman dataset
from pandas import read_csv
from sklearn.preprocessing import LabelEncoder
from sklearn.metrics import accuracy_score
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense
# load the dataset
path = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/haberman.csv'
df = read_csv(path, header=None)
# split into input and output columns
X, y = df.values[:, :-1], df.values[:, -1]
# ensure all data are floating point values
X = X.astype('float32')
# encode strings to integer
le = LabelEncoder()
y = le.fit_transform(y)
# determine the number of input features
n_features = X.shape[1]
# define model
model = Sequential()
model.add(Dense(10, activation='relu', kernel_initializer='he_normal', input_shape=(n_features,)))
model.add(Dense(1, activation='sigmoid'))
# compile the model
model.compile(optimizer='adam', loss='binary_crossentropy')
# fit the model
model.fit(X, y, epochs=200, batch_size=16, verbose=0)
# define a row of new data
row = [30,64,1]
# make prediction
yhat = (model.predict([row]) > 0.5).astype('int32').ravel()
# invert transform to get label for class
yhat = le.inverse_transform(yhat)
# report prediction
print('Predicted: %s' % (yhat[0]))
Running the example fits the model on the entire dataset and makes a prediction for a single row of new data.
In this case, we can see that the model predicted a “1” label for the input row.
Predicted: 1
Further Reading
This section provides more resources on the topic if you are looking to go deeper.
As a small business owner, it is understandable that you want your website to be as successful as possible. The website is a way to attract new customers and retain the loyalty of existing ones. Keeping this in mind, it is clear that ensuring the speed of your website is essential. GreenGeeks is a popular web hosting provider used by many businesses around the world. Does it have the speed that you need? Let’s find out!
About GreenGeeks
The creator of GreenGeeks, Trey Gardner, began the company because he realized how few eco-friendly web hosting companies existed. In addition to this, the industry was notorious for the amount of energy it consumed. He wanted to create a web hosting company that helped the environment rather than hinder it.
GreenGeeks started back in 2008 and has been steadily growing ever since then. GreenGeeks is currently hosting over 500,000 websites. They are 300% energy efficient, and their hosting clients are carbon-reducing. This is in contrast to the majority of web hosting companies in the world.
Speed
The first thing you will want to know about is the provider’s speed. Customers of GreenGeeks have tested the speed from different areas around the world; the figures below are page-load response times in milliseconds. For this particular test, the website was hosted in Canada. The results were as follows:
· Western United States: 66ms
· Eastern United States: 19ms
· London: 191ms
· Singapore: 455ms
· Sao Paulo: 151ms
· Bangalore: 324ms
· Sydney: 262ms
· Japan: 214ms
· Canada: 9ms
· Germany: 95ms
As a worldwide average, the speed of the website was 178.60ms.
As a general rule of thumb, website owners should keep page-load times under 2,000ms. If they do not accomplish this, they run the risk of losing visitors’ attention: a website that takes more than 2,000ms to load has a bounce rate over four times that of quicker sites. The bounce rate refers to visitors who open the webpage but navigate away before it loads. A high bounce rate lowers the SEO rating of the site and causes it to fall below competitors. GreenGeeks’ loading times are well under this threshold, even when tested from a variety of different countries.
Data Centers
The number of data centers that a web hosting provider has is a factor in determining their speed. The location where these centers are will also affect the speed. If you have data centers spread out in a lot of different locations, your clients in those areas will experience faster speeds. GreenGeeks has data centers in the following cities:
· Montreal, Canada
· Toronto, Canada
· Phoenix, United States
· Chicago, United States
· Amsterdam, Netherlands
Having data centers near their customers is one reason why GreenGeeks excels at providing quick speeds. People will be able to work more efficiently and have fewer connectivity problems if there is a data center relatively close to them. GreenGeeks has committed to always keeping its data centers updated with the newest technology and hardware.
Optimization
Another way that GreenGeeks improves speed is through the various optimizations that they offer. Having a hosting plan with the company includes the use of the following perks:
Free CDN: Powered by Cloudflare, this service reduces the amount of time it takes for clients to receive website data. This occurs by caching the content and saving a copy of the static web pages.
SSD Hard Drives: GreenGeeks always tries to stay in tune with the changes in technology. Several years ago, they upgraded all their clients to servers that use SSD (solid-state drives). Client website files and databases get stored on these drives, which improves the speed and reliability of service.
PHP 7: The company was very quick to enable PHP 7 on all its servers. PHP is a programing language, and 7 is the latest update. PHP code gets inserted into HTML code, improving the information interface. With this increased performance, website speeds will correspondingly improve.
The Verdict?
GreenGeeks is certainly a strong contender for fast speeds. This web hosting provider offers customers very quick loading times, as demonstrated by the speed test above. In addition to this, their various measures to improve speed continually contribute to their success. This includes having data centers spread out around the world and working on new optimizations to increase their customers’ speed.
Patrick Gaydecki started programming in 1987. He has a showcase entry (Vsound 2.7) in the Delphi 26th Showcase Challenge and we got to interview him about his programming experiences. Visit the Vsound website to get more information.
When did you start using RAD Studio/Delphi and how long have you been using it?
I started using Borland Turbo Pascal back in 1987. I migrated to Borland Pascal for Windows, and then to Delphi when it first appeared in 1995. I am currently using the latest version, Embarcadero RAD Studio 10.4 (Sydney).
What was it like building software before you had RAD Studio/Delphi?
In my company we have always developed programs that have a graphical user interface. Before RAD Studio, we had to develop everything from a low level, even down to components such as buttons, dialog boxes and, of course, graphing/charting components. A simple program shell would take days to craft, rather than minutes.
How did RAD Studio/Delphi help you create your showcase application?
Our customers are musicians – mainly players of electric violins. Vsound includes the user interface and the hardware – called a pedal – that modifies the sound produced by an electric violin (or an acoustic violin fitted with a pickup), producing an output that matches the timbre and voice of a high quality acoustic violin. Our customers need to be able to quickly and easily adjust the parameters of the system to produce the sound that they want, so an intuitive GUI is critical. Delphi has exactly the tools for the job.
What made RAD Studio/Delphi stand out from other options?
In a word, speed. It is so simple to create attractive, powerful applications. The compiler is also remarkably efficient, generating stand-alone executables in seconds, for both Mac and Windows.
What made you happiest about working with RAD Studio/Delphi?
Many things, but in particular the speed with which we can create attractive visuals that do the job they are supposed to do. The software is also doing quite a lot of calculations – Fast Fourier transforms for example, and Delphi is very fast at this.
What have you been able to achieve through using RAD Studio/Delphi to create your showcase application?
A user system that has high visual impact that is also robust and fault tolerant. Using both the VCL and Firemonkey, we have developed platforms for Windows and Mac OSX with a minimum of code conversion.
What are some future plans for your showcase application?
Wait and see! We have a range of new products in the pipeline, with an amazing new app to support our latest hardware. As ever, we will be focusing on both form and function. RAD Studio allows us a lot of flexibility in this regard.
Thank you, Patrick! Check out his showcase entry through the link below.
In this tutorial, we'll take a look at how to check if a string starts with a substring in JavaScript.
This is easily achieved either through the startsWith() method, or regular expressions.
Check if String Starts with Another String with startsWith()
The startsWith(searchString[, position]) method returns a boolean which indicates whether a string begins with the characters of a specified searchString. Optionally we can also use the position argument to specify the position of the string at which to begin searching.
Let's see this in action:
const str = "This is an example for startsWith() method";
console.log(str.startsWith("This")); // true
console.log(str.startsWith("is", 2)); // true
In the first example, we are checking if the string str starts with "This".
In the second example, we are checking if str starts with "is" when starting the search from index 2 (i.e., the 3rd character).
Check if String Starts with Another String with Regular Expressions
Regular Expressions are really powerful, and allow us to match various patterns. This is a great use-case for them, since we're essentially checking for a pattern - if a string starts with a substring.
The regexObj.test(str) method tries to match the regular expression regexObj against the string str and returns a boolean value which indicates whether a match was found:
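For example, a minimal sketch (the strings here are illustrative):

```javascript
const str = "hello world";
const regEx = /^he/; // ^ anchors the pattern to the start of the string
console.log(regEx.test(str)); // true
console.log(regEx.test("the end")); // false - "he" is not at the start
```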
In this approach, we are checking whether the pattern regEx occurs at the start of the string str. The ^ metacharacter anchors the specified pattern to the start of a line. Thus, the regular expression /^he/ checks if the specified line starts with the substring he.
Conclusion
In this tutorial, we've taken a look at how to check if a string starts with a substring in vanilla JavaScript, using the startsWith() method, as well as Regular Expressions.
pdflayer is an API used by developers for seamless automated conversion of high-quality HTML to PDF on any platform (websites, applications).
The lightweight RESTful API enables developers to generate highly customizable PDFs from URLs and HTML. Additionally, the platform offers a robust and sturdy infrastructure with simple and straightforward integration.
Here’s a catch!
The architecture of pdflayer is built using the combination of various powerful PDF rendering engines. This makes the platform most productive, reliable, and cost-effective for developers to process a large number of documents in a shorter span of time.
What makes pdflayer discernible from other APIs?
A complete series of customization tools, including document settings, a variety of layout settings, security and protection, interface and branding tweaks, and many more are included in the pdflayer API.
Moreover, the API offers high throughput, with its infrastructure efficient enough to process thousands of requests at a time.
Not restricted by particular limitations, the pdflayer API is compatible with all programming languages. Users merely need to make a request using the URL structure, and the API will do the rest.
pdflayer Features
High-Quality PDF Conversion
Customized PDFs can be produced with a GET or POST from any URL or brand HTML within seconds.
Robust PDF Engine
pdflayer combines several powerful PDF engines based on browsers running stalwart operating systems.
Powerful CDN
The API uses lightning-fast CDN to store PDF documents that can be retrieved in milliseconds.
Tracking Statistics
Users can track their API statistics and usage every month. Also, the API reminds users with notifications if they are running low.
Bounteous Customization
pdflayer offers full customization: whatever works for browsers will also work for the API, including HTML, CSS, XML, SVG, JavaScript, margins, headers, footers, page numbers, watermark support, and many more.
pdflayer Pricing
The platform offers a free plan for users to get started. However, for professional and enterprise requirements, there are different plans available:
Basic: $9.99 per month/ $95.90 per year
Professional: $39.99 per month/ $383.90 per year
Enterprise: $119.99 per month/ $1151.90 per year
Now, let’s get started with how to use pdflayer API in the Android application.
How to use pdflayer API in your Android application?
API Access Key and Authentication
After registering, each user receives an API access key, a unique password for requesting the pdflayer API. A base endpoint URL is available where users need to attach the API access key for authenticating pdflayer API.
The key features of the pdflayer API are set up for use by HTTP POST. The pdflayer API can also handle GET requests using its simple URL structure for clients who wish to make API requests through HTTP GET.
Getting Started with pdflayer API
Here are the three simple steps for building an API request:
Step 1 | Base URL
Every API request is based on the following URL:
http://api.pdflayer.com/api/convert
Step 2 | Parameters Requirements
Now, authenticate your access key by inserting a URL with the document_url parameter or supplying raw HTML code with the document_html parameter and appending your access key.
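As an illustrative sketch (YOUR_ACCESS_KEY and the document URL are placeholders, and no request is actually sent here), the authenticated request URL can be assembled like this:

```python
# Build a pdflayer conversion URL from the base endpoint plus the
# access_key and document_url parameters described above.
from urllib.parse import urlencode

base_url = 'http://api.pdflayer.com/api/convert'
params = {
    'access_key': 'YOUR_ACCESS_KEY',  # placeholder API access key
    'document_url': 'https://example.com/invoice.html',  # page to convert
}
request_url = base_url + '?' + urlencode(params)
print(request_url)
```

Passing this URL to an HTTP GET (or sending the same parameters via POST) would return the generated PDF.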
Step 3 | Optional Parameters
To fully customize and configure PDFs, developers can make use of optional parameters. Here are some of them:
For a complete list of functionalities and parameters, click here.
API Request Example
This API Request uses some of the below-mentioned optional parameters for converting an HTML document into a PDF.
Before passing a URL to any API parameter, it is advised to URL-encode it. If the URL contains a special character, like ‘&,’ URL encoding is necessary.
It shows how the above-given URL has been passed into an API Request.
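A minimal sketch of the encoding step using Python’s standard library (the URL is illustrative):

```python
# URL-encode a document URL containing '&' so it is not mistaken
# for a parameter separator when appended to the API request.
from urllib.parse import quote

document_url = 'https://example.com/report?year=2020&format=full'
encoded = quote(document_url, safe='')
print(encoded)
```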
API Error Codes
If the above query fails to run, the pdflayer API will return “success”: false along with a three-digit error code. It will also display an internal error type and a short text message suggesting how to correct the error.
Consider the example of an error triggered with no URL specified:
Document Configuration
The pdflayer API-created PDF documents are called ‘pdflayer.pdf’ by default. You can define a custom name of your final PDF document using the document_name parameter of the API.
Here are the rate limits of Requests to the API based on subscription plans:
Conclusion
The pdflayer API is programmed to automatically translate HTML into PDF easily and efficiently in any application or web app. The API is highly convenient to use even for a non-technical person. Users merely have to authenticate the pdflayer API by appending the access key to the base endpoint URL. The API will do the rest.
pdflayer is the most trusted and authoritative HTML to PDF conversion with lightweight RESTful architecture. The platform offers high flexibility and customizable options to developers. Additionally, the API can be implemented with any programming language because of its high compatibility.
pdflayer generates around one hundred PDFs monthly for free. If your requirements are high, you may opt for any of the subscription plans mentioned above.
Just recently there have been some great webinars and posts on how to modernize your applications. We’ve gathered together a collection of the most recent ones which focus on Microsoft’s gorgeous Fluent Design System.
First: What Is The Fluent Design System And How Can It Help Modernize Your Applications?
Visitors to the Desktop First Conference were able to hear directly from Microsoft Engineer Matteo Pagani. In this video Matteo describes Fluent UI in particular from Microsoft’s perspective and how it can help add that superb look and really modernize your applications.
Next two videos from Embarcadero’s own Delphi MVP, self-confessed ‘design geek’ Ian Barker – this is part one:
…and this is the follow-up, part two:
We have a lot more coming in the next few weeks specifically on this topic; and there’s never been a better time to learn about the different techniques and the great tools RAD Studio provides to modernize your applications!
When did you start using RAD Studio/Delphi and how long have you been using it?
At that time it was owned by Borland and it was Delphi 3. I was working for a research institute in Ivory Coast (West Africa). The purpose was to develop an application to manage our research protocols and results. We were working in association with a research institute in Montpellier, France. I discovered the great potential of that development tool and since then, each time I have to develop a program, I use it if possible.
What was it like building software before you had RAD Studio/Delphi?
Before we started using Delphi, we spent a lot of time writing code for both the functionality and the design.
How did RAD Studio/Delphi help you create your showcase application?
My application is mainly developed using C++ Builder (Rad Studio 10.2.3). But I am using some third party tools, mainly related to biometrics features, which work only with Delphi. So I had to develop a Datasnap server to be able to call biometrics procedures and functions, developed with Delphi in my C++ Builder application. So, in this case interoperability between C++ Builder and Delphi was crucial!
What made RAD Studio/Delphi stand out from other options?
I prefer to use RAD Studio because visual components make the development of applications easier. My application eLynceus is built with C++ Builder and also connects to a DataSnap server written with Delphi. It is also easier to connect to databases and manipulate their data with components like the ClientDataSet.
What made you happiest about working with RAD Studio/Delphi?
1. Visual components: they are easy to use, and I can also build my own components.
2. The Object Inspector.
3. Database components like ClientDataSet, TQuery, TIBQuery, etc.
4. The possibility to build applications for different platforms (macOS, Android, Linux) from the same source code.
5. The documentation: I learned the software by myself.
What have you been able to achieve through using RAD Studio/Delphi to create your showcase application?
Using RAD Studio, we were able to build eLynceus, a protection tool and web application which uses facial recognition to identify wanted persons (dangerous criminals as well as kidnapped and missing persons). The application’s features can be divided into four categories:
1. Search in major criminal databases (FBI USA, RCMP Canada, etc.) using textual search with multiple criteria.
2. Use facial recognition to search criminal databases by uploading a picture or taking a snapshot.
3. Automatic facial identification as a protection tool. Used at home with a webcam or an IP camera, it has the same functionality as a home security system, plus the possibility to identify dangerous criminals. Used from a mobile device, it can provide vital information (locations and pictures) for criminal investigations.
4. Social media features: users can find and stay in touch with people they know and have lost contact with.
What are some future plans for your showcase application?
Our web application requires a lot of computation resources, which means that Windows Server is not the right OS for deployment. We are waiting impatiently for RAD Studio 10.5 and the possibility to build applications for 64-bit Linux. This is a must-have!
Thank you, Arsene! Click the link below to look into his showcase entry.
Today, we are excited to announce that Microsoft Defender for Endpoint support of Windows 10 on Arm devices is generally available. This expanded support is part of our continued efforts to extend Microsoft Defender for Endpoint capabilities across all the endpoints defenders need to secure.
Arm technology is enabling the digital transformation with innovative new form factors, better connectivity and mobile possibilities, instant-on technology, and amazing battery life. These elements also empower organizations to support the shift to remote and fluid work environments – a shift that requires a security-first mindset. As we continue to move forward in a new hybrid work environment, security needs to be an integral part of that change. Microsoft is committed to empowering defenders in their daily efforts to protect their organizations’ data and employees. This commitment is deeply ingrained in our DNA and reflected in the product investments that we make.
Microsoft’s investment in Windows 10 on Arm offers powerful, highly mobile experiences with security at the core. These devices are designed to take full advantage of the built-in protections available in Windows 10, such as encryption, data protection, and next-gen antivirus and antimalware capabilities. Microsoft Defender for Endpoint complements these security features with an industry-leading, unified, cloud-powered enterprise endpoint security platform that helps security teams prevent, detect, investigate, and respond to advanced threats, while delivering secure and productive end-user experiences.
Security teams will find that there are no changes to the experience on Arm-based PCs. All the data, insights, and functionality in Microsoft Defender for Endpoint are exactly the same as they’ve always been, including device inventory, alerts, response actions, advanced hunting, and more, as well as the onboarding experience.
As always, many of our feature and capability enhancements and investments are driven by customer feedback. We thank our customers for their continued journey with us.
Microsoft Defender for Endpoint is an industry leading, cloud powered endpoint security solution offering vulnerability management, endpoint protection, endpoint detection and response, and mobile threat defense. With our solution, threats are no match. If you’re not yet taking advantage of Microsoft’s unrivaled threat optics and proven capabilities, sign up for a free Microsoft Defender for Endpoint trial today.
This visualization sample demonstrates the use of the TMapView class. We will show how to display and interact with the map, including:
Changing between two tabs that display different maps.
Showing the coordinates of the map center.
Zooming in and zooming out both maps.
Location Visualization
You can find the Tabbed Map sample project at:
Start | Programs | Embarcadero RAD Studio Sydney | Samples and navigate to:
Object Pascal\Multi-Device Samples\Device Sensors and Services\Maps
CPP\Multi-Device Samples\Device Sensors and Services\Maps
GitHub Repositories:
You can find Delphi and C++ code samples in GitHub Repositories. Search by name into the samples repositories according to your RAD Studio version.
Visualization with Google Maps on Android
If you are running this sample on Android, in order to access the Google Maps servers, you have to add a Maps API key to the sample. To acquire the API key and add it in the sample you need to follow these configuration steps:
Configure the sample project options. Once you have the Maps API Key, in RAD Studio:
Go to Project > Options > Version Info
Select Android platform as Target (either in Debug, Release or All Configurations).
Add the Maps API Key value in the apiKey key, and click OK.
How Do We Use the Sample?
Navigate to one of the locations given above and open:
Delphi: TabbedMapProject.dproj.
C++: TabMapProject.cbproj.
If you are running the sample on Android, ensure you first follow the steps indicated in Using Google Maps on Android
Before you run the sample, ensure the device is connected to the Internet.
Press F9 or choose Run > Run.
When you run the sample, the TMapView loads the map.
To interact with the map:
Use the Saint-Pétersbourg and San Francisco tabs to change between the two maps.
Change the zoom using the Zoom out and Zoom in buttons.
Move the map and see the coordinates of the map center in the CameraInfo TLabel, at the bottom of the app.
Files
File in Delphi | File in C++ | Contains
TabbedMapProject.dproj | TabMapProject.cbproj | The project itself.
TabbedMap.fmx | TabbedMap.fmx | The main form where the components are located.
TabbedMap.pas | TabbedMap.h, TabbedMap.cpp | Implementation of the sample.
Maps Visualization Implementation
The sample uses TMapView to display and manage the maps.
TMapCoordinate is used to create the initial coordinates with the indicated latitude and longitude. Then, the center of each map is set to those coordinates with the TMapView.Location property.
The TMapView.Zoom property is used to set the initial zoom of both maps to 10. The same property is also used to zoom in and out of both maps by adding or subtracting 1 from it.
Cybersecurity has been in the news far more often in the past 12 months than in previous years, as cybercriminals escalated their activity during the COVID-19 pandemic quarantine. The seismic shift of hundreds of millions of people connecting and working from home every day presented cybercriminals with greater opportunities to attack and new threat vectors to exploit, as was detailed in the Microsoft 2020 Digital Defense Report.
Cybercrime is a large and flourishing enterprise, unfortunately. Like in any business, innovation fuels success and profit.
Business email compromise is on the rise
Even the oldest cybercriminal tricks are constantly evolving to extract more revenue. Email phishing—when individuals or organizations receive a fraudulent email encouraging them to click on a link, giving the cybercriminal access to a device or personal information—has become a dominant vector for attacking enterprise digital estates. In the variant known as business email compromise (BEC), cybercriminals have responded to technical advancements in detection by developing fast-moving phishing scams that can victimize even the savviest professionals.
BEC criminals know that email is today’s de facto method of communication. People have been encouraged to “go paperless” by companies, and most feel confident they can spot a spam email. But they also inherently trust those they work with and are more likely to respond to requests from their company’s executives, as well as their trusted suppliers and business partners. A real but compromised account anywhere in the communication stream can lead to disastrous results.
Cybercriminals bank, quite literally, on these human, socially reinforced patterns. And it’s not surprising that cybercriminals succeed with schemes that appear, at least in retrospect, unbelievably primitive and transparent. In fact, one quite well-known BEC scam that used keylogger malware to fine-tune email access—and operated without detection for six months in 2015—redirected invoice payments totaling $75 million to cybercriminal bank accounts. In hindsight, one might expect that someone would notice, given the vast amount of money involved. But no one did.
As severe as the consequences of BEC can be, they are unfortunately also quite frequent. Since 2009, 17 percent of the cyber incidents reported to Chubb have stemmed from social engineering. And the risk is only increasing—the scale and threat of email phishing attacks are growing.
Take action: Reduce email phishing attacks with MFA
Enabling multi-factor authentication (MFA) can be one of the quickest and most impactful ways to protect user identities, and an effective means to reduce the threat and potential impact of BEC. MFA has been available for all Microsoft Office 365 users since 2014, yet many small- to mid-sized business system administrators have not enabled it for their users.
In a joint white paper co-written by Microsoft and Chubb, the world’s largest publicly traded insurance provider, we explain how multi-factor authentication foils fraud, and how implementing MFA may be easier and less painful for your users than you think. It’s a simple yet effective means to reduce the threat and potential impact of BEC.
Embrace Zero Trust to protect your complex digital estate
Beyond the benefits of multi-factor authentication, the move toward Zero Trust security can enable and secure your remote workforce, increase the speed of threat detection and remediation, mitigate the impact of potential breaches, and make it harder for cybercriminals to make money.
The business of cybercrime will continue to grow. However, by increasing the complexity and cost of perpetrating that crime, businesses can disincentivize the criminals to the point where they move on toward easier targets.
Learn more
To learn more about email phishing and how to protect your organization, read these blogs:
To learn more about Microsoft Security solutions visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.
Java is a type-safe programming language. Type safety ensures a layer of validity and robustness in a programming language. It is a key part of Java's security to ensure that operations done on an object are only performed if the type of the object supports it.
Type safety dramatically reduces the number of programming errors that might occur during runtime, involving all kinds of errors linked to type mismatches. Instead, these types of errors are caught during compile-time which is much better than catching errors during runtime, allowing developers to have less unexpected and unplanned trips to the good old debugger.
Type safety is also interchangeably called strong typing.
Java Generics is a solution designed to reinforce the type safety that Java was designed to have. Generics allow types to be parameterized onto methods and classes and introduces a new layer of abstraction for formal parameters. This will be explained in detail later on.
There are many advantages of using generics in Java. Implementing generics into your code can greatly improve its overall quality by preventing unprecedented runtime errors involving data types and typecasting.
This guide will demonstrate the declaration, implementation, use-cases, and benefits of generics in Java.
Why Use Generics?
To provide context as to how generics reinforce strong typing and prevents runtime errors involving typecasting, let's take a look at a code snippet.
Let's say you want to store a bunch of String variables in a list. Coding this without using generics would look like this:
List stringList = new ArrayList();
stringList.add("Apple");
This code won't trigger any compile-time errors but most IDEs will warn you that the List that you've initialized is of a raw type and should be parameterized with a generic.
IDEs warn you of problems that can occur if you don't parameterize a list with a type. One such problem is being able to add elements of any data type to the list. Lists will, by default, accept any Object type, which includes every single one of its subtypes:
List stringList = new ArrayList();
stringList.add("Apple");
stringList.add(1);
Adding two or more different types within the same collection violates the rules of type safety. This code will successfully compile but this definitely will cause a multitude of problems.
For example, what happens if we try to loop through the list? Let's use an enhanced for loop:
for (String string : stringList) {
System.out.println(string);
}
We'll be greeted with a:
Main.java:9: error: incompatible types: Object cannot be converted to String
for (String string : stringList) {
In fact, this isn't because we've put a String and Integer together. If we changed the example around and added two Strings:
List stringList = new ArrayList();
stringList.add("Apple");
stringList.add("Orange");
for (String string : stringList) {
System.out.println(string);
}
We'd still be greeted with:
Main.java:9: error: incompatible types: Object cannot be converted to String
for (String string : stringList) {
This is because without any parametrization, the List only deals with Objects. You can technically circumvent this by using an Object in the enhanced for-loop:
List stringList = new ArrayList();
stringList.add("Apple");
stringList.add(1);
for (Object object : stringList) {
System.out.println(object);
}
Which would print out:
Apple
1
However, this is very much against intuition and isn't a real fix. This is just avoiding the underlying design problem in an unsustainable way.
Another problem is the need to typecast whenever you access and assign elements within a list without generics. To assign new reference variables to the elements of the list, we must typecast them, since the get() method returns Objects:
String str = (String) stringList.get(0);
Integer num = (Integer) stringList.get(1);
In this case, how will you be able to determine the type of each element during runtime, so you know which type to cast it to? There aren't many options and the ones at your disposal complicate things way out of proportion, like using try/catch blocks to try and cast elements into some predefined types.
Also, if you fail to cast the list element during assignment, it will display an error like this:
Type mismatch: cannot convert from Object to Integer
In OOP, explicit casting should be avoided as much as possible, because it moves type errors from compile time to runtime.
Lastly, because the List class is a subtype of Collection, it has access to iterators through the Iterator object, the iterator() method, and for-each loops. If a collection is declared without generics, you won't be able to use any of these in a type-safe manner.
This is why Java Generics came to be, and why they're an integral part of the Java ecosystem. Let's take a look at how to declare generic classes, and rewrite this example to utilize generics and avoid the issues we've just seen.
Generic Classes and Objects
Let's declare a class with a generic type. To specify a parameter type on a class or an object, we use the angle bracket symbols <> beside its name and assign a type for it inside the brackets. The syntax of declaring a generic class looks like this:
public class Thing<T> {
private T val;
public Thing(T val) { this.val = val; }
public T getVal() { return this.val; }
public void printVal(T val) {
System.out.println("Generic type: " + val.getClass().getName());
}
}
Note: Generic types can NOT be assigned primitive data types such as int, char, long, double, or float. If you want to assign these data types, then use their wrapper classes instead.
The letter T inside the angle brackets is called a type parameter. By convention, type parameters are single lettered (A-Z) and uppercase. Some other common type parameter names used are K (Key), V (Value), E (Element), and N (Number).
Although you can, in theory, give a type parameter any name that follows Java's variable-naming conventions, it is good practice to follow the typical type parameter convention, to differentiate type parameters from ordinary variables.
The val is of a generic type. It can be a String, an Integer, or another object. Given the generic class Thing declared above, let's instantiate the class as a few different objects, of different types:
public void callThing() {
// Three implementations of the generic class Thing with 3 different data types
Thing<Integer> thing1 = new Thing<>(1);
Thing<String> thing2 = new Thing<>("String thing");
Thing<Double> thing3 = new Thing<>(3.5);
System.out.println(thing1.getVal() + " " + thing2.getVal() + " " + thing3.getVal());
}
Notice how we're not specifying the parameter type before the constructor calls. Java infers the type of the object during initialization so you won't need to retype it during the initialization. In this case, the type is already inferred from the variable declaration. This behavior is called type inference. If we inherited this class, in a class such as SubThing, we also wouldn't need to explicitly set the type when instantiating it as a Thing, since it'd infer the type from its parent class.
You can specify it in both places, but it's just redundant:
Thing<Integer> thing1 = new Thing<Integer>(1);
Thing<String> thing2 = new Thing<String>("String thing");
Thing<Double> thing3 = new Thing<Double>(3.5);
If we run the code, it'll result in:
1 String thing 3.5
Using generics allows type-safe abstraction without having to use typecasting, which is much riskier in the long run.
In a similar vein, the List constructor accepts a generic type:
public interface List<E> extends Collection<E> {
// ...
}
In our previous examples, we haven't specified a type, resulting in the List being a List of Objects. Now, let's rewrite the example from before:
List<String> stringList = new ArrayList<>();
stringList.add("Apple");
stringList.add("Orange");
for (String string : stringList) {
System.out.println(string);
}
This results in:
Apple
Orange
Works like a charm! Again, we don't need to specify the type in the ArrayList() call, since it infers the type from the List<String> definition. The only case in which you'll have to specify the type after the constructor call is if you're taking advantage of the local variable type inference feature of Java 10+:
var stringList = new ArrayList<String>();
stringList.add("Apple");
stringList.add("Orange");
This time around, since the var keyword infers the variable's type from the right-hand side, the ArrayList<>() call has nothing to infer the element type from, and it will simply default to Object if we don't specify the type ourselves.
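A small sketch of that behavior (the class and method names here are hypothetical):

```java
import java.util.ArrayList;
import java.util.List;

public class VarDemo {
    public static List<Object> mixedList() {
        // With var on the left and a bare diamond on the right, the
        // compiler resolves the element type to Object.
        var anything = new ArrayList<>();
        anything.add("Apple");
        anything.add(1); // compiles: the list is an ArrayList<Object>
        return anything;
    }
}
```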
Generic Methods
Java supports method declarations with generic parameters and return types. Generic methods are declared exactly like normal methods but have the angle brackets notation before the return type.
Let's declare a simple generic method that accepts 3 parameters, adds them to a list, and returns it:
public static <E> List<E> zipTogether(E element1, E element2, E element3) {
List<E> list = new ArrayList<>();
list.addAll(Arrays.asList(element1, element2, element3));
return list;
}
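Calling it, the element type E is inferred from the arguments; the method is repeated here so the sketch is self-contained:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class ZipDemo {
    // Same method as above, repeated for a self-contained example
    public static <E> List<E> zipTogether(E element1, E element2, E element3) {
        List<E> list = new ArrayList<>();
        list.addAll(Arrays.asList(element1, element2, element3));
        return list;
    }

    public static void main(String[] args) {
        // E is inferred as String from the arguments
        List<String> fruit = zipTogether("Apple", "Orange", "Pear");
        System.out.println(fruit); // [Apple, Orange, Pear]
    }
}
```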
Multiple type parameters are also supported for classes and methods. If a method uses more than one type parameter, you can list all of them inside the angle brackets before the return type, separated by commas:
// Methods with void return types are also compatible with generic methods
public static <T, K, V> void printValues(T val1, K val2, V val3) {
System.out.println(val1 + " " + val2 + " " + val3);
}
Here, you can get creative with what you pass in. Following the conventions, we'll pass in a type, key and value:
printValues(new Thing("Employee"), 125, "David");
Which results in:
Thing{val=Employee} 125 David
Though, keep in mind that type arguments that are already concrete, like String, don't need a type parameter of their own in the declaration before the return type. To demonstrate, let's create another method that accepts two parameters: a generic Map and a List that can exclusively contain String values:
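The snippet this passage refers to doesn't appear in the text; here is a minimal sketch of such a method, where the method name and body are assumptions and only the parameter shape comes from the description:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class MapDemo {
    // K and V are declared before the return type and inferred from the Map
    // argument at the call site. The List parameter is fixed to String, so
    // it needs no type parameter of its own.
    public static <K, V> List<String> describe(Map<K, V> map, List<String> strings) {
        List<String> lines = new ArrayList<>();
        map.forEach((k, v) -> lines.add(k + " -> " + v));
        lines.addAll(strings);
        return lines;
    }
}
```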
Here, the K and V generic types are mapped to the Map<K, V> since they're inferred types. On the other hand, since the List<String> can only accept strings, there's no need to add the generic type to the <K, V> list.
We've now covered generic classes, objects, and methods with one or more type parameters. What if we want to limit the extent of abstraction that a type parameter has? This limitation can be implemented using parameter binding.
Bounded Type Parameters
Parameter Binding allows the type parameter to be limited to an object and its subclasses. This allows you to enforce certain classes and their subtypes, while still having the flexibility and abstraction of using generic type parameters.
To specify that a type parameter is bounded, we simply use the extends keyword on the type parameter - <N extends Number>. This makes sure that the type parameter N we supply to a class or method is of type Number.
Let's declare a class, called InvoiceDetail, which accepts a type parameter, and make sure that that type parameter is of type Number. This way, the generic types we can use while instantiating the class are limited to integer and floating-point values, as Number is the superclass of all the numeric wrapper classes, such as Integer and Double:
class InvoiceDetail<N extends Number> {
private String invoiceName;
private N amount;
private N discount;
// Getters, setters, constructors...
}
Here, extends can mean two things - extends, in the case of classes, and implements in the case of interfaces. Since Number is an abstract class, it's used in the context of extending that class.
By extending the type parameter N as a Number subclass, the instantiation of amount and discount are now limited to Number and its subtypes. Trying to set them to any other type will trigger a compile-time error.
Let's try to erroneously assign String values, instead of a Number type:
InvoiceDetail<String> invoice = new InvoiceDetail<>("Invoice Name", "50.99", ".10");
Since String isn't a subtype of Number, the compiler catches that and triggers an error:
Bound mismatch: The type String is not a valid substitute for the bounded parameter <N extends Number> of the type InvoiceDetail<N>
This is a great example of how using generics enforces type-safety.
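By contrast, instantiating with a Number subtype compiles fine. A self-contained sketch, with a constructor shape assumed from the fields above:

```java
class InvoiceDetail<N extends Number> {
    private String invoiceName;
    private N amount;
    private N discount;

    // Constructor shape assumed from the fields shown earlier
    InvoiceDetail(String invoiceName, N amount, N discount) {
        this.invoiceName = invoiceName;
        this.amount = amount;
        this.discount = discount;
    }

    N getAmount() { return amount; }

    public static void main(String[] args) {
        // Double is a subtype of Number, so this satisfies the bound
        InvoiceDetail<Double> invoice = new InvoiceDetail<>("Invoice Name", 50.99, 0.10);
        System.out.println(invoice.getAmount()); // 50.99
    }
}
```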
Additionally, a single type parameter can extend multiple classes and interfaces by using the & operator for the subsequently extended classes:
public class SampleClass<E extends T1 & T2 & T3> {
// ...
}
It's also worth noting that another great usage of bounded type parameters is in method declarations. For example, if you want to enforce that the types passed into a method conform to some interfaces, you can make sure that the type parameters extend a certain interface.
A classic example of this is enforcing that two types are Comparable, if you're comparing them in a method such as:
public static <T extends Comparable<T>> int compare(T t1, T t2) {
return t1.compareTo(t2);
}
Here, using generics, we enforce that t1 and t2 are both Comparable, and that they can genuinely be compared with the compareTo() method. Knowing that Strings are Comparable and override the compareTo() method, we can comfortably use them here:
System.out.println(compare("John", "Doe"));
The code results in:
6
However, if we tried using a non-Comparable type, such as Thing, which doesn't implement the Comparable interface:
System.out.println(compare(new Thing<String>("John"), new Thing<String>("Doe")));
Other than the IDE marking this line as erroneous, if we try running this code, it'll result in:
java: method compare in class Main cannot be applied to given types;
required: T,T
found: Thing<java.lang.String>,Thing<java.lang.String>
reason: inference variable T has incompatible bounds
lower bounds: java.lang.Comparable<T>
lower bounds: Thing<java.lang.String>
In this case, since Comparable is an interface, the extends keyword actually enforces that the interface is implemented by T, not extended.
Wildcards in Generics
Wildcards are used to symbolize any class type, and are denoted by ?. In general, you'll want to use wildcards when you have potential incompatibilities between different instantiations of a generic type. There are three types of wildcards: upper-bounded, lower-bounded and unbounded.
Choosing which approach you'll use is usually determined by the IN-OUT principle. The IN-OUT principle defines In-variables and Out-variables, which, in simpler terms, represent if a variable is used to provide data, or to serve in its output.
For example, a sendEmail(String body, String recipient) method has an In-variable body and an Out-variable recipient. The body variable provides data on the body of the email you'd like to send, while the recipient variable provides the email address you'd like to send it to.
There are also mixed variables, which are used to both provide data, and then reference the result itself, in which case, you'll want to avoid using wildcards.
Generally speaking, you'll want to define In-variables with upper bounded wildcards, using the extends keyword and Out-variables with lower bounded wildcards, using the super keyword.
For In-variables that are accessed only through methods of the Object class, you should prefer unbounded wildcards.
Upper-Bounded Wildcards
Upper-bounded wildcards are used to provide a generic type that limits a variable to a class or an interface and all its subtypes. The name upper-bounded refers to the fact that you bound the variable to an upper type - and all of its subtypes.
In a sense, upper-bounded variables are more relaxed than lower-bounded variables, since they allow for more types. They're declared using the wildcard operator ? followed by the keyword extends and the supertype class or interface (the upper bound of their type):
<? extends SomeObject>
Here, extends, again, means extends classes and implements interfaces.
To recap, upper-bounded wildcards are typically used for objects that provide input to be consumed - In-variables.
Note: There's a distinct difference between Class<Generic> and Class<? extends Generic>. The former allows only the Generic type to be used. In the latter, all subtypes of Generic are also valid.
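The note above can be illustrated with a short sketch. The method names sumExact() and sumAny() are hypothetical, chosen just for this illustration:

```java
import java.util.List;

public class WildcardNote {
    // Analogous to Class<Generic>: accepts exactly List<Number>, nothing else
    static double sumExact(List<Number> numbers) {
        double total = 0;
        for (Number n : numbers) total += n.doubleValue();
        return total;
    }

    // Analogous to Class<? extends Generic>: accepts List<Integer>, List<Double>, etc.
    static double sumAny(List<? extends Number> numbers) {
        double total = 0;
        for (Number n : numbers) total += n.doubleValue();
        return total;
    }

    public static void main(String[] args) {
        List<Integer> ints = List.of(1, 2, 3);
        // sumExact(ints);              // does not compile: List<Integer> is not List<Number>
        System.out.println(sumAny(ints)); // 6.0
    }
}
```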
Let's make an upper-type (Employee) and its subclass (Developer):
public abstract class Employee {
private int id;
private String name;
// Constructor, getters, setters
}
And:
public class Developer extends Employee {
private List<String> skillStack;
// Constructor, getters and setters
@Override
public String toString() {
return "Developer {" +
"\nskillStack=" + skillStack +
"\nname=" + super.getName() +
"\nid=" + super.getId() +
"\n}";
}
}
Now, let's make a simple printInfo() method, that accepts an upper-bounded list of Employee objects:
public static void printInfo(List<? extends Employee> employeeList) {
for (Employee e : employeeList) {
System.out.println(e.toString());
}
}
The List of employees we supply is upper-bounded to Employee, which means we can chuck in any Employee instance, as well as its subclasses, such as Developer:
List<Developer> devList = new ArrayList<>();
devList.add(new Developer(15, "David", new ArrayList<String>(List.of("Java", "Spring"))));
devList.add(new Developer(25, "Rayven", new ArrayList<String>(List.of("Java", "Spring"))));
printInfo(devList);
Lower-Bounded Wildcards
Lower-bounded wildcards are the opposite of upper-bounded ones. They allow a generic type to be restricted to a class or interface and all of its supertypes - here, the class or interface is the lower bound.
Declaring lower-bounded wildcards follows the same pattern as upper-bounded wildcards - a wildcard (?) followed by super and the lower-bound type:
<? super SomeObject>
Based on the IN-OUT principle, lower-bounded wildcards are used for objects that are involved in the output of data - Out-variables.
Let's revisit the email functionality from before and make a hierarchy of classes:
public class Email {
private String email;
// Constructor, getters, setters, toString()
}
Now, let's make a subclass for Email:
public class ValidEmail extends Email {
// Constructor, getters, setters
}
We'll also want to have some utility class, such as MailSender to "send" emails and notify us of the results:
public class MailSender {
public String sendMail(String body, Object recipient) {
return "Email sent to: " + recipient.toString();
}
}
Finally, let's write a method that accepts a body and recipients list and sends them the body, notifying us of the result:
public static String sendMail(String body, List<? super ValidEmail> recipients) {
MailSender mailSender = new MailSender();
StringBuilder sb = new StringBuilder();
for (Object o : recipients) {
String result = mailSender.sendMail(body, o);
sb.append(result+"\n");
}
return sb.toString();
}
Here, we've used a lower bound of ValidEmail, which extends Email - so the method accepts lists of ValidEmail or any of its supertypes. We're free to create Email instances and chuck them into this method:
List<Email> recipients = new ArrayList<>(List.of(
new Email("david.landup@mail.com"),
new Email("rayven.esplanada@mail.com")));
String result = sendMail("Hello World!", recipients);
System.out.println(result);
This results in:
Email sent to: Email{email='david.landup@mail.com'}
Email sent to: Email{email='rayven.esplanada@mail.com'}
Unbounded Wildcards
Unbounded wildcards are wildcards without any form of bound. Simply put, they match any type, with the base Object class as the implicit upper bound.
Unbounded wildcards are used when the Object class is the one being accessed or manipulated or if the method it's being used on does not access or manipulate using a type parameter. Otherwise, using unbounded wildcards will compromise the type safety of the method.
To declare an unbounded wildcard, simply use the question mark operator encapsulated within angle brackets <?>.
For example, we can have a List of any element:
public void print(List<?> elements) {
for(Object element : elements) {
System.out.println(element);
}
}
System.out.println() accepts any object, so we're good to go here. If the method were to copy an existing list into a new list, then upper-bounded wildcards are more favorable.
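A list-copying helper naturally combines both kinds of bounds - the source is an In-variable (extends) while the destination is an Out-variable (super). The copyInto() name below is our own, though the standard library's Collections.copy() has the same shape:

```java
import java.util.ArrayList;
import java.util.List;

public class CopyDemo {
    // src produces T values (extends), dest consumes them (super)
    static <T> void copyInto(List<? super T> dest, List<? extends T> src) {
        for (T item : src) {
            dest.add(item);
        }
    }

    public static void main(String[] args) {
        List<Integer> src = List.of(1, 2, 3);
        List<Number> dest = new ArrayList<>();
        copyInto(dest, src); // a List<Number> is a valid destination for Integers
        System.out.println(dest); // [1, 2, 3]
    }
}
```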
What's the Difference Between Bounded Wildcards and Bounded Type Parameters?
You may have noticed that the sections for bounded wildcards and bounded type parameters are separate, yet have more or less the same definition, and on the surface the two look interchangeable:
<E extends Number>
<? extends Number>
So, what's the difference between these two approaches? There are several differences, in fact:
Bounded type parameters accept multiple bounds joined by the & operator, while bounded wildcards accept only a single type as their bound.
Bounded type parameters are only limited to upper-bounds. This means that you cannot use the super keyword on bounded type parameters.
Bounded wildcards can only be used during instantiation. They cannot be used in declarations (e.g. class declarations or constructor calls). A few examples of invalid use of wildcards are:
class Example<? extends Object> {...}
GenericObj<?> = new GenericObj<?>()
GenericObj<? extends Object> = new GenericObj<? extends Object>()
Bounded wildcards should not be used as return types. This will not trigger any errors or exceptions, but it forces unnecessary handling and typecasting, which runs completely against the type safety that generics achieve.
The ? operator cannot be used as an actual parameter type and can only appear as a wildcard in a generic parameter. For example:
public <?> void printDisplay(? var) {} will fail during compilation, while
public <E> void printDisplay(E var) compiles and runs successfully.
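The first difference - multiple bounds joined by & - can be sketched as follows. The max() helper is hypothetical; note that a class bound (here Number) must come first in the bounds list:

```java
public class MultiBound {
    // T must be both a Number and Comparable to itself;
    // a wildcard cannot express this combined bound
    static <T extends Number & Comparable<T>> T max(T a, T b) {
        return a.compareTo(b) >= 0 ? a : b;
    }

    public static void main(String[] args) {
        System.out.println(max(3, 7));     // 7
        System.out.println(max(2.5, 1.5)); // 2.5
        // max("a", "b"); // does not compile: String is not a Number
    }
}
```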
Benefits of Using Generics
Throughout the guide, we've covered the primary benefit of generics - to provide an additional layer of type safety for your program. Apart from that, generics offer many other benefits over code that doesn't use them.
Runtime errors involving types and casting are caught at compile time. Typecasting should be avoided because the compiler cannot recognize casting exceptions at compile time. When used correctly, generics completely avoid the use of typecasting, and subsequently all the runtime exceptions it might trigger.
Classes and methods are more reusable. With generics, classes and methods can be reused by different types without having to override methods or create a separate class.
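As a quick sketch of the type-safety point (a hypothetical demo class), compare a raw list with a generic one:

```java
import java.util.ArrayList;
import java.util.List;

public class TypeSafetyDemo {
    @SuppressWarnings({"rawtypes", "unchecked"})
    public static void main(String[] args) {
        // Without generics, a raw list accepts anything...
        List raw = new ArrayList();
        raw.add("hello");
        raw.add(42);
        // ...and a wrong cast only fails at runtime:
        // String oops = (String) raw.get(1); // ClassCastException

        // With generics, the same mistake never compiles:
        List<String> safe = new ArrayList<>();
        safe.add("hello");
        // safe.add(42); // compile-time error
        String first = safe.get(0); // no cast needed
        System.out.println(first);
    }
}
```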
Conclusion
Applying generics to your code will significantly improve code reusability, readability, and more importantly, type safety. In this guide, we've gone into what generics are, how you can apply them, the differences between approaches and when to choose which.
Some prediction problems require predicting both numeric values and a class label for the same input.
A simple approach is to develop both regression and classification predictive models on the same data and use the models sequentially.
An alternative and often more effective approach is to develop a single neural network model that can predict both a numeric and class label value from the same input. This is called a multi-output model and can be relatively easy to develop and evaluate using modern deep learning libraries such as Keras and TensorFlow.
In this tutorial, you will discover how to develop a neural network for combined regression and classification predictions.
After completing this tutorial, you will know:
Some prediction problems require predicting both numeric and class label values for each input example.
How to develop separate regression and classification models for problems that require multiple outputs.
How to develop and evaluate a neural network model capable of making simultaneous regression and classification predictions.
Let’s get started.
Develop Neural Network for Combined Classification and Regression Photo by Sang Trinh, some rights reserved.
Tutorial Overview
This tutorial is divided into three parts; they are:
Single Model for Regression and Classification
Separate Regression and Classification Models
Abalone Dataset
Regression Model
Classification Model
Combined Regression and Classification Models
Single Model for Regression and Classification
It is common to develop a deep learning neural network model for a regression or classification problem, but on some predictive modeling tasks, we may want to develop a single model that can make both regression and classification predictions.
Regression refers to predictive modeling problems that involve predicting a numeric value given an input.
Classification refers to predictive modeling problems that involve predicting a class label or probability of class labels for a given input.
For more on the difference between classification and regression, see the tutorial:
There may be some problems where we want to predict both a numerical value and a classification value.
One approach to solving this problem is to develop a separate model for each prediction that is required.
The problem with this approach is that the predictions made by the separate models may diverge.
An alternate approach that can be used when using neural network models is to develop a single model capable of making separate predictions for a numeric and class output for the same input.
This is called a multi-output neural network model.
The benefit of this type of model is that we have a single model to develop and maintain instead of two models and that training and updating the model on both output types at the same time may offer more consistency in the predictions between the two output types.
We will develop a multi-output neural network model capable of making regression and classification predictions at the same time.
First, let’s select a dataset where this requirement makes sense and start by developing separate models for both regression and classification predictions.
Separate Regression and Classification Models
In this section, we will start by selecting a real dataset where we may want regression and classification predictions at the same time, then develop separate models for each type of prediction.
Abalone Dataset
We will use the “abalone” dataset.
Determining the age of an abalone is a time-consuming task and it is desirable to determine the age from physical details alone.
This is a dataset that describes the physical details of abalone and requires predicting the number of rings of the abalone, which is a proxy for the age of the creature.
We can use the data as the basis for developing separate regression and classification Multilayer Perceptron (MLP) neural network models.
Note: we are not trying to develop an optimal model for this dataset; instead we are demonstrating a specific technique: developing a model that can make both regression and classification predictions.
Regression Model
In this section, we will develop a regression MLP model for the abalone dataset.
First, we must separate the columns into input and output elements and drop the first column that contains string values.
We will also force all loaded columns to have a float type (expected by neural network models) and record the number of input features, which will need to be known by the model later.
...
# split into input (X) and output (y) variables
X, y = dataset[:, 1:-1], dataset[:, -1]
X, y = X.astype('float'), y.astype('float')
n_features = X.shape[1]
Next, we can split the dataset into a train and test dataset.
We will use a 67% random sample to train the model and the remaining 33% to evaluate the model.
...
# split data into train and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=1)
We can then define an MLP neural network model.
The model will have two hidden layers, the first with 20 nodes and the second with 10 nodes, both using ReLU activation and “he normal” weight initialization (a good practice). The number of layers and nodes were chosen arbitrarily.
The output layer will have a single node for predicting a numeric value and a linear activation function.
...
# define the keras model
model = Sequential()
model.add(Dense(20, input_dim=n_features, activation='relu', kernel_initializer='he_normal'))
model.add(Dense(10, activation='relu', kernel_initializer='he_normal'))
model.add(Dense(1, activation='linear'))
The model will be trained to minimize the mean squared error (MSE) loss function using the effective Adam version of stochastic gradient descent.
...
# compile the keras model
model.compile(loss='mse', optimizer='adam')
We will train the model for 150 epochs with a mini-batch size of 32 samples, again chosen arbitrarily.
...
# fit the keras model on the dataset
model.fit(X_train, y_train, epochs=150, batch_size=32, verbose=2)
Finally, after the model is trained, we will evaluate it on the holdout test dataset and report the mean absolute error (MAE).
...
# evaluate on test set
yhat = model.predict(X_test)
error = mean_absolute_error(y_test, yhat)
print('MAE: %.3f' % error)
Tying this all together, the complete example of an MLP neural network for the abalone dataset framed as a regression problem is listed below.
# regression mlp model for the abalone dataset
from pandas import read_csv
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split
# load dataset
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/abalone.csv'
dataframe = read_csv(url, header=None)
dataset = dataframe.values
# split into input (X) and output (y) variables
X, y = dataset[:, 1:-1], dataset[:, -1]
X, y = X.astype('float'), y.astype('float')
n_features = X.shape[1]
# split data into train and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=1)
# define the keras model
model = Sequential()
model.add(Dense(20, input_dim=n_features, activation='relu', kernel_initializer='he_normal'))
model.add(Dense(10, activation='relu', kernel_initializer='he_normal'))
model.add(Dense(1, activation='linear'))
# compile the keras model
model.compile(loss='mse', optimizer='adam')
# fit the keras model on the dataset
model.fit(X_train, y_train, epochs=150, batch_size=32, verbose=2)
# evaluate on test set
yhat = model.predict(X_test)
error = mean_absolute_error(y_test, yhat)
print('MAE: %.3f' % error)
Running the example will prepare the dataset, fit the model, and report an estimate of model error.
Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.
In this case, we can see that the model achieved an error of about 1.5 (rings).
Classification Model
In this section, we will develop a classification MLP model for the abalone dataset, treating each unique number of rings as a class label and encoding the target values as integers with a LabelEncoder. We can also record the total number of classes as the total number of unique encoded class values, which will be needed by the model later.
...
# encode strings to integer
y = LabelEncoder().fit_transform(y)
n_class = len(unique(y))
After splitting the data into train and test sets as before, we can define the model and change the number of outputs from the model to equal the number of classes and use the softmax activation function, common for multi-class classification.
...
# define the keras model
model = Sequential()
model.add(Dense(20, input_dim=n_features, activation='relu', kernel_initializer='he_normal'))
model.add(Dense(10, activation='relu', kernel_initializer='he_normal'))
model.add(Dense(n_class, activation='softmax'))
Given we have encoded class labels as integer values, we can fit the model by minimizing the sparse categorical cross-entropy loss function, appropriate for multi-class classification tasks with integer encoded class labels.
...
# compile the keras model
model.compile(loss='sparse_categorical_crossentropy', optimizer='adam')
After the model is fit on the training dataset as before, we can evaluate the performance of the model by calculating the classification accuracy on the hold-out test set.
...
# evaluate on test set
yhat = model.predict(X_test)
yhat = argmax(yhat, axis=-1).astype('int')
acc = accuracy_score(y_test, yhat)
print('Accuracy: %.3f' % acc)
Tying this all together, the complete example of an MLP neural network for the abalone dataset framed as a classification problem is listed below.
# classification mlp model for the abalone dataset
from numpy import unique
from numpy import argmax
from pandas import read_csv
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
# load dataset
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/abalone.csv'
dataframe = read_csv(url, header=None)
dataset = dataframe.values
# split into input (X) and output (y) variables
X, y = dataset[:, 1:-1], dataset[:, -1]
X, y = X.astype('float'), y.astype('float')
n_features = X.shape[1]
# encode strings to integer
y = LabelEncoder().fit_transform(y)
n_class = len(unique(y))
# split data into train and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=1)
# define the keras model
model = Sequential()
model.add(Dense(20, input_dim=n_features, activation='relu', kernel_initializer='he_normal'))
model.add(Dense(10, activation='relu', kernel_initializer='he_normal'))
model.add(Dense(n_class, activation='softmax'))
# compile the keras model
model.compile(loss='sparse_categorical_crossentropy', optimizer='adam')
# fit the keras model on the dataset
model.fit(X_train, y_train, epochs=150, batch_size=32, verbose=2)
# evaluate on test set
yhat = model.predict(X_test)
yhat = argmax(yhat, axis=-1).astype('int')
acc = accuracy_score(y_test, yhat)
print('Accuracy: %.3f' % acc)
Running the example will prepare the dataset, fit the model, and report an estimate of model accuracy.
Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.
In this case, we can see that the model achieved an accuracy of about 27%.
Combined Regression and Classification Models
In this section, we will develop a single multi-output MLP model that makes both predictions at once. We can prepare the dataset as we did before for classification, although we should save the encoded target variable under a separate name to differentiate it from the raw target variable values.
We can then split the input, raw output, and encoded output variables into train and test sets.
...
# split data into train and test sets
X_train, X_test, y_train, y_test, y_train_class, y_test_class = train_test_split(X, y, y_class, test_size=0.33, random_state=1)
Next, we can define the model using the functional API.
The model takes the same number of inputs as before with the standalone models and uses two hidden layers configured in the same way.
We can then define the model with a single input layer and two output layers.
...
# define model
model = Model(inputs=visible, outputs=[out_reg, out_clas])
Given the two output layers, we can compile the model with two loss functions, mean squared error loss for the first (regression) output layer and sparse categorical cross-entropy for the second (classification) output layer.
...
# compile the keras model
model.compile(loss=['mse','sparse_categorical_crossentropy'], optimizer='adam')
We can also create a plot of the model for reference.
This requires that pydot and pygraphviz are installed. If this is a problem, you can comment out this line and the import statement for the plot_model() function.
...
# plot graph of model
plot_model(model, to_file='model.png', show_shapes=True)
Each time the model makes a prediction, it will predict two values.
Similarly, when training the model, it will need one target variable per sample for each output.
As such, we can train the model, carefully providing both the regression target and classification target data to each output of the model.
...
# fit the keras model on the dataset
model.fit(X_train, [y_train,y_train_class], epochs=150, batch_size=32, verbose=2)
The fit model can then make a regression and classification prediction for each example in the hold-out test set.
...
# make predictions on test set
yhat1, yhat2 = model.predict(X_test)
The first array can be used to evaluate the regression predictions via mean absolute error.
...
# calculate error for regression model
error = mean_absolute_error(y_test, yhat1)
print('MAE: %.3f' % error)
The second array can be used to evaluate the classification predictions via classification accuracy.
...
# evaluate accuracy for classification model
yhat2 = argmax(yhat2, axis=-1).astype('int')
acc = accuracy_score(y_test_class, yhat2)
print('Accuracy: %.3f' % acc)
And that’s it.
Tying this together, the complete example of training and evaluating a multi-output model for combined regression and classification predictions on the abalone dataset is listed below.
# mlp for combined regression and classification predictions on the abalone dataset
from numpy import unique
from numpy import argmax
from pandas import read_csv
from sklearn.metrics import mean_absolute_error
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input
from tensorflow.keras.layers import Dense
from tensorflow.keras.utils import plot_model
# load dataset
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/abalone.csv'
dataframe = read_csv(url, header=None)
dataset = dataframe.values
# split into input (X) and output (y) variables
X, y = dataset[:, 1:-1], dataset[:, -1]
X, y = X.astype('float'), y.astype('float')
n_features = X.shape[1]
# encode strings to integer
y_class = LabelEncoder().fit_transform(y)
n_class = len(unique(y_class))
# split data into train and test sets
X_train, X_test, y_train, y_test, y_train_class, y_test_class = train_test_split(X, y, y_class, test_size=0.33, random_state=1)
# input
visible = Input(shape=(n_features,))
hidden1 = Dense(20, activation='relu', kernel_initializer='he_normal')(visible)
hidden2 = Dense(10, activation='relu', kernel_initializer='he_normal')(hidden1)
# regression output
out_reg = Dense(1, activation='linear')(hidden2)
# classification output
out_clas = Dense(n_class, activation='softmax')(hidden2)
# define model
model = Model(inputs=visible, outputs=[out_reg, out_clas])
# compile the keras model
model.compile(loss=['mse','sparse_categorical_crossentropy'], optimizer='adam')
# plot graph of model
plot_model(model, to_file='model.png', show_shapes=True)
# fit the keras model on the dataset
model.fit(X_train, [y_train,y_train_class], epochs=150, batch_size=32, verbose=2)
# make predictions on test set
yhat1, yhat2 = model.predict(X_test)
# calculate error for regression model
error = mean_absolute_error(y_test, yhat1)
print('MAE: %.3f' % error)
# evaluate accuracy for classification model
yhat2 = argmax(yhat2, axis=-1).astype('int')
acc = accuracy_score(y_test_class, yhat2)
print('Accuracy: %.3f' % acc)
Running the example will prepare the dataset, fit the model, and report an estimate of model performance on both outputs.
Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.
A plot of the multi-output model is created, clearly showing the regression (left) and classification (right) output layers connected to the second hidden layer of the model.
Plot of the Multi-Output Model for Combined Regression and Classification Predictions
In this case, we can see that the model achieved both a reasonable error of about 1.495 (rings) and a similar accuracy as before of about 25.6%.
In this article, we will look at an interesting algorithm from graph theory: Hierholzer's Algorithm. We will discuss a problem and solve it using this algorithm with examples, then discuss the approach and analyze the complexities of the solution.
Hierholzer's Algorithm is mainly used for finding an Euler Path or an Eulerian Circuit in a given directed or undirected graph. An Euler Path (or Euler Trail) is a path that visits every edge in a graph exactly once. An Eulerian Circuit (or Cycle) is an Euler Path that starts and ends on the same vertex.
Let us understand this with an example. Consider this graph:
In the above directed graph, assuming we start from Node 0, the Euler Path is 0 -> 1 -> 4 -> 3 -> 1 -> 2, and the Eulerian Circuit is 0 -> 1 -> 4 -> 3 -> 1 -> 2 -> 3 -> 0. We can see that the Eulerian Circuit starts and ends on the same vertex, 0.
Note: Some nodes are repeated in the Euler Path. This is because the above graph is directed, so we have to find a path along the directed edges. If the above graph were undirected, the path would be 0 -> 1 -> 2 -> 3 -> 4.
Necessary Conditions for Eulerian Circuit
Now let us look at some conditions which must hold for an Eulerian Graph to exist in a Directed Graph.
Every vertex must have an equal In-degree and Out-degree. In-degree is the number of edges incident on a vertex. Out-degree is the number of outgoing edges from a vertex.
There can be at most one vertex whose Out-degree - In-degree = 1 and at most one vertex whose In-degree - Out-degree = 1. If there is more than one such vertex of either kind, an Eulerian Circuit does not exist for the graph.
The vertices which satisfy this second condition can act as the starting and ending vertices of the Euler Path.
If the In- and Out-degrees of all vertices are equal to each other, any vertex can be our starting node. Generally, we choose a vertex with the smallest Out-degree, or an odd-degree vertex.
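These degree conditions are cheap to verify programmatically before running the main algorithm. The following is a sketch: the hasEulerianCircuit() helper is our own, and it checks only the degree condition, not connectivity:

```java
import java.util.List;

public class EulerCheck {
    // Necessary condition for a directed Eulerian Circuit:
    // every vertex has equal In-degree and Out-degree
    static boolean hasEulerianCircuit(List<List<Integer>> adj) {
        int n = adj.size();
        int[] in = new int[n];
        int[] out = new int[n];
        for (int u = 0; u < n; u++) {
            out[u] = adj.get(u).size();  // outgoing edges of u
            for (int v : adj.get(u)) {
                in[v]++;                 // incoming edge into v
            }
        }
        for (int i = 0; i < n; i++) {
            if (in[i] != out[i]) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        // The example graph from this article: 0->1, 1->2, 1->4, 2->3, 3->0, 3->1, 4->3
        List<List<Integer>> adj = List.of(
            List.of(1), List.of(2, 4), List.of(3), List.of(0, 1), List.of(3));
        System.out.println(hasEulerianCircuit(adj)); // true
    }
}
```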
The In-degree and Out-degree of each vertex of the above graph are:
Vertex 0: In-degree 1, Out-degree 1
Vertex 1: In-degree 2, Out-degree 2
Vertex 2: In-degree 1, Out-degree 1
Vertex 3: In-degree 2, Out-degree 2
Vertex 4: In-degree 1, Out-degree 1
Now let us look at how Hierholzer’s Algorithm is useful in finding Eulerian Circuit for the above graph.
For the above graph, we choose Vertex 0 as the starting node and follow a trail of edges from that vertex until returning to it. We cannot get stuck at any vertex other than the starting one, because the In-degree and Out-degree of every vertex are the same.
If we come back to the start vertex while some vertices are still unvisited, we backtrack to the nearest node which has an edge to an unvisited node. We repeat this process, following the trail along the directed edges, until we get back to the starting node; then we unwind the stack and print the nodes.
For each node we visit, we decrement the count of its outgoing edges (its Out-degree) by 1, to ensure that we do not visit the same vertex again unless there exists a node that can only be reached from that vertex.
Step-by-Step Example
Let us look at a step by step example how we use this Algorithm for the above example graph.
We start from our starting node, Node 0, and move to Node 1. We decrement the Out-degree of the source node after every node we visit, so the current Out-degree of Node 0 is 0. The Euler Path so far is: 0 -> 1
After this, we do a normal DFS traversal for every node, so we visit Node 4 and decrement the Out-degree of Node 1 to 1. The path now is: 0 -> 1 -> 4
Node 4 has an outgoing edge to Node 3; we visit it and update Node 4's Out-degree, which is now 0. The Euler Path is now: 0 -> 1 -> 4 -> 3
Node 3 has an outgoing edge back to Node 1; we visit it and decrement Node 3's Out-degree to 1. We do not visit Node 0 yet, because it has no pending nodes to be visited; along with this, we maintain the constraint discussed in Step 3 of the algorithm above. The path now is: 0 -> 1 -> 4 -> 3 -> 1
Node 1 has an edge to the still-unvisited Node 2; we traverse to it and update Node 1's outgoing edge count to 1 - 1 = 0. The updated path is: 0 -> 1 -> 4 -> 3 -> 1 -> 2
Finally, Node 2 has an edge to Node 3; we visit it and update Node 2's Out-degree to 0, thereby completing the Euler Path through all the nodes. To complete the cycle, or Eulerian Circuit, we visit Node 0 from Node 3, which yields: 0 -> 1 -> 4 -> 3 -> 1 -> 2 -> 3 -> 0
Note: There was no Back-tracking done in this example, as we visited every node once except the condition when node 2 was to be visited.
Implementation in Java
For the implementation we use a 2D list in Java (a vector of vectors in C++) to store the nodes along with their outgoing edges. We use a Map (HashMap) to store the count of outgoing edges for each vertex: the key is the vertex and the value is its Out-degree. We use a Stack to track which nodes are being processed at any instant. As soon as the Out-degree of a node reaches 0, we add it to our result list.
Let us look at the implementation code in Java:
import java.util.*;
public class Hierholzer_Euler
{
public static void main(String args[])
{
List< List<Integer> > adj = new ArrayList<>();
// Build the Graph
adj.add(new ArrayList<Integer>());
adj.get(0).add(1);
adj.add(new ArrayList<Integer>());
adj.get(1).add(2);
adj.get(1).add(4);
adj.add(new ArrayList<Integer>());
adj.get(2).add(3);
adj.add(new ArrayList<Integer>());
adj.get(3).add(0);
adj.get(3).add(1);
adj.add(new ArrayList<Integer>());
adj.get(4).add(3);
System.out.println("The Eulerian Circuit for the Graph is : ");
printEulerianCircuit(adj);
}
static void printEulerianCircuit(List< List<Integer> > adj)
{
// adj represents the adjacency list of
// the directed graph
// edge represents the number of edges emerging from a vertex
Map<Integer,Integer> edges=new HashMap<Integer,Integer>();
for (int i=0; i<adj.size(); i++)
{
//find the count of edges to keep track of unused edges
edges.put(i,adj.get(i).size());
}
// Maintain a stack to keep vertices
Stack<Integer> curr_path = new Stack<Integer>();
// vector to store final circuit
List<Integer> circuit = new ArrayList<Integer>();
// We start from vertex 0
curr_path.push(0);
// Current vertex
int curr_v = 0;
while (!curr_path.empty())
{
// If there's remaining edge
if (edges.get(curr_v)>0)
{
// Push the vertex visited.
curr_path.push(adj.get(curr_v).get(edges.get(curr_v) - 1));
// and remove that edge or decrement the edge count.
edges.put(curr_v, edges.get(curr_v) - 1);
// Move to next vertex
curr_v = curr_path.peek();
}
// back-track to find remaining circuit
else
{
circuit.add(curr_path.peek());
curr_v = curr_path.pop();
}
}
// After getting the circuit, now print it in reverse
for (int i=circuit.size()-1; i>=0; i--)
{
System.out.print(circuit.get(i));
if(i!=0)
System.out.print(" -> ");
}
}
}
Output:
The Eulerian Circuit for the Graph is :
0 -> 1 -> 4 -> 3 -> 1 -> 2 -> 3 -> 0
Now, let us have a quick look at the complexities of this Algorithm.
Time Complexity: We do a modified DFS traversal, where we traverse at most all the edges in the graph to complete the Eulerian Circuit, so the time complexity is O(E) for E edges in the graph. Unlike Fleury's Algorithm, which takes O(E^2) time, Hierholzer's Algorithm is more efficient.
Space Complexity: For extra Space, we use a Map and a Stack to keep track of the edges of each node and the nodes processed respectively. So we at the most store all the vertices of the Graph, so the overall complexity is O(V), where V is the number of vertices.
That’s it for the article. You can try out this algorithm with different examples and execute the code for a better understanding.
Let us know your suggestions or doubts (if any) in the comments section below.
The Windows Ribbon Framework is a rich command presentation system that offers a fresh alternative to the layered menus, toolbars, and task panes of traditional Windows applications.
This Delphi library allows Delphi developers to use the Windows Ribbon Framework in their Delphi applications. It relies on the native Windows Ribbon Framework library to implement the ribbon functionality; it does not emulate the Ribbon user interface as other Delphi component sets do, and that’s a good thing.
Windows Ribbon Framework Features
This Delphi library is much more than a simple header translation. It has the following features:
Complete translation of the UI Ribbon header files.
A class library that provides higher-level access to the Ribbon API.
A control for dropping on any existing VCL form that automatically loads the ribbon and maps ribbon commands to equally named VCL Actions.
Delphi-versions of the UI Ribbon Samples from the Windows SDK.
A feature-complete semi-visual Ribbon Designer.
The Ribbon Designer comes with a WordPad template that lets you quickly create a Ribbon that looks virtually identical to the WordPad accessory that comes with earlier versions of Microsoft Windows.
The website www.fmxexpress.com has an article with some really detailed information about the ModernListView library. Let’s see what they have to say.
“Developer rzaripov1990 has a custom ListView component over on Github for Firemonkey in Delphi 10 Berlin. The ListView is the central component for every mobile application, and as a developer you should always choose the one that can be heavily customizable and very easy to use/implement. This modern ListView component is available for Delphi 10 Berlin with FireMonkey on Android, IOS, OSX, and Windows”.
What are the features of the ModernListView Library?
One nice feature is that it has both a horizontal and a vertical mode. Thus, using
ListView.Horizontal := true
enables the list to display the cells (items) horizontally, while
ListView.Horizontal := false
displays the items vertically. If you are an artist when designing the look and feel of your application, this component lets you customize every graphical aspect with methods such as SetColorItemSelected, SetColorItemFill, SetColorBackground, SetColorItemSeparator, SetColorText, SetColorTextSelected, SetColorTextDetail, SetColorHeader, SetColorTextHeader, and many other properties.
The properties are self-explanatory, so there is no need to cover them here. With the AutoColumns and ColumnWidth properties, the component will automatically calculate the best-fitting appearance and position for the items when populating the list (very useful when dealing with a large number of items).
Apart from the standard behavior events, you have an OnColumnClick listener for the ListView. You also have the option to hide/show the scroll bars (ListView.ShowScrollBar) and to set indents for the item separators (ListView.SeparatorLeftOffset and ListView.SeparatorRightOffset).
How much does the ModernListView Library cost?
The component is free and comes with some nice demos as well. For the moment it is available only for Delphi Berlin using FireMonkey, which makes it very useful if you build multi-device applications.
Using ModernListView Library
Let’s get a better view of what this is all about. We will now go through some of the components, their design, and what they do.
There are many data visualization libraries in Python, yet Matplotlib is the most popular of them all. Matplotlib's popularity is due to its reliability and utility - it's able to create both simple and complex plots with little code. You can also customize the plots in a variety of ways.
In this tutorial, we'll cover how to plot a Joint Plot in Matplotlib which consists of a Scatter Plot and multiple Distribution Plots on the same Figure.
Joint Plots are used to explore relationships between bivariate data, as well as their distributions at the same time.
Note: This sort of task is much more fit for libraries such as Seaborn, which has a built-in jointplot() function. With Matplotlib, we'll construct a Joint Plot manually, using GridSpec and multiple Axes objects, instead of having Seaborn do it for us.
Importing Data
We'll use the famous Iris Dataset, since we can explore the relationship between features such as SepalWidthCm and SepalLengthCm through a Scatter Plot, but also explore the distributions between the Species feature with their sepal length/width in mind, through Distribution Plots at the same time.
Let's import the dataset and take a peek:
import pandas as pd
df = pd.read_csv('iris.csv')
print(df.head())
We'll be exploring the bivariate relationship between the SepalLengthCm and SepalWidthCm features here, but also their distributions. We can approach this in two ways - with respect to their Species or not.
We can totally disregard the Species feature, and simply plot histograms of the distributions of each flower instance. On the other hand, we can color-code and plot distribution plots of each flower instance, highlighting the difference in their Species as well.
We'll explore both options here, starting with the simpler one - disregarding the Species altogether.
Plot a Joint Plot in Matplotlib with Single-Class Histograms
In the first approach, we'll just load in the flower instances and plot them as-is, with no regard to their Species.
We'll be using a GridSpec to customize our figure's layout, to make space for three different plots and Axes instances.
To invoke the GridSpec constructor, we'll want to import it alongside the PyPlot instance:
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib.gridspec import GridSpec
Now, let's create our Figure and create the Axes objects:
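Since the original snippet isn't reproduced in this excerpt, here is a sketch of what that step can look like. The 4x4 grid and the row/column spans below are an assumption about the layout, chosen so that the scatter area gets most of the space, with slim strips reserved for the marginal histograms:

```python
import matplotlib.pyplot as plt
from matplotlib.gridspec import GridSpec

fig = plt.figure(figsize=(8, 8))
gs = GridSpec(4, 4)  # 4x4 grid of cells; the ratios here are an assumption

ax_scatter = fig.add_subplot(gs[1:4, 0:3])  # main Scatter Plot area (bottom-left 3x3)
ax_hist_x = fig.add_subplot(gs[0, 0:3])     # distribution along the x-axis (top strip)
ax_hist_y = fig.add_subplot(gs[1:4, 3])     # distribution along the y-axis (right strip)
```

Slicing the GridSpec (`gs[1:4, 0:3]`) lets a single Axes span several grid cells, which is what gives the scatter area its larger share of the figure.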
We've created 3 Axes instances by adding subplots to the figure, using our GridSpec instance to position them. This results in a Figure with 3 empty Axes instances:
Now that we've got the layout and positioning in place, all we have to do is plot the data on our Axes. Let's update the script so that we plot the SepalLengthCm and SepalWidthCm features through a Scatter plot, on our ax_scatter axes, and each of these features on the ax_hist_y and ax_hist_x axes:
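Putting the pieces together, the updated script might look like the following sketch. Since iris.csv isn't bundled with this excerpt, the DataFrame below is filled with synthetic stand-in data (an assumption) so the snippet runs on its own; with the real dataset you would keep the pd.read_csv('iris.csv') line instead:

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib.gridspec import GridSpec

# Stand-in for df = pd.read_csv('iris.csv') so this sketch is self-contained
rng = np.random.default_rng(0)
df = pd.DataFrame({'SepalLengthCm': rng.normal(5.8, 0.8, 150),
                   'SepalWidthCm': rng.normal(3.0, 0.4, 150)})

fig = plt.figure(figsize=(8, 8))
gs = GridSpec(4, 4)
ax_scatter = fig.add_subplot(gs[1:4, 0:3])
ax_hist_x = fig.add_subplot(gs[0, 0:3])
ax_hist_y = fig.add_subplot(gs[1:4, 3])

# Bivariate relationship in the center, marginal distributions on the sides
ax_scatter.scatter(df['SepalLengthCm'], df['SepalWidthCm'])
ax_hist_x.hist(df['SepalLengthCm'])
ax_hist_y.hist(df['SepalWidthCm'], orientation='horizontal')
# plt.show() would display the figure
```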
We've set the orientation of ax_hist_y to horizontal so that its distribution is plotted sideways, on the right-hand side of the Scatter Plot, matching the layout we defined with the GridSpec:
This results in a Joint Plot of the relationship between the SepalLengthCm and SepalWidthCm features, as well as the distributions for the respective features.
Plot a Joint Plot in Matplotlib with Multiple-Class Histograms
Now, another case we might want to explore is the distribution of these features, with respect to the Species of the flower, since it could very possibly affect the range of sepal lengths and widths.
For this, we won't be using just one histogram for each axis, where each contains all flower instances, but rather, we'll be overlaying a histogram for each Species on both axes.
To do this, we'll first have to dissect the DataFrame we've been using before, by the flower Species:
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib.gridspec import GridSpec
df = pd.read_csv('iris.csv')
setosa = df[df['Species']=='Iris-setosa']
virginica = df[df['Species']=='Iris-virginica']
versicolor = df[df['Species']=='Iris-versicolor']
species = df['Species']
colors = {
'Iris-setosa' : 'tab:blue',
'Iris-versicolor' : 'tab:red',
'Iris-virginica' : 'tab:green'
}
Here, we've just filtered out the DataFrame, by the Species feature into three separate datasets. The setosa, virginica and versicolor datasets now contain only their respective instances.
We'll also want to color each of these instances with a different color, based on their Species, both in the Scatter Plot and in the Histograms. For that, we've simply cut out a Series of the Species feature, and made a colors dictionary, which we'll use to map() the Species of each flower to a color later on.
Now, let's make our Figure, GridSpec and Axes instances:
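That step might look like the following self-contained sketch; once again, the synthetic DataFrame stands in for pd.read_csv('iris.csv') (an assumption so the snippet runs on its own), and the 4x4 GridSpec layout is the same assumption as before:

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib.gridspec import GridSpec

# Stand-in for df = pd.read_csv('iris.csv') so this sketch is self-contained
rng = np.random.default_rng(0)
names = ['Iris-setosa', 'Iris-versicolor', 'Iris-virginica']
df = pd.DataFrame({'SepalLengthCm': rng.normal(5.8, 0.8, 150),
                   'SepalWidthCm': rng.normal(3.0, 0.4, 150),
                   'Species': np.repeat(names, 50)})
species = df['Species']
colors = {'Iris-setosa': 'tab:blue',
          'Iris-versicolor': 'tab:red',
          'Iris-virginica': 'tab:green'}

fig = plt.figure(figsize=(8, 8))
gs = GridSpec(4, 4)
ax_scatter = fig.add_subplot(gs[1:4, 0:3])
ax_hist_x = fig.add_subplot(gs[0, 0:3])
ax_hist_y = fig.add_subplot(gs[1:4, 3])

# Map each flower's Species to its color and pass the result to c
ax_scatter.scatter(df['SepalLengthCm'], df['SepalWidthCm'], c=species.map(colors))

# Overlay one histogram per Species on each marginal axis
for name, group in df.groupby('Species'):
    ax_hist_x.hist(group['SepalLengthCm'], color=colors[name], alpha=0.4)
    ax_hist_y.hist(group['SepalWidthCm'], color=colors[name], alpha=0.4,
                   orientation='horizontal')
# plt.show() would display the figure
```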
When provided to the c argument of the scatter() function, it applies colors to instances in that order, effectively coloring each instance with a color corresponding to its species.
For the Histograms, we've simply plotted three plots, one for each Species, with their respective colors. You can opt for a step Histogram here, and tweak the alpha value to create different-looking distributions.
Running this code results in:
Now, each Species has its own color and distribution, plotted separately from other flowers. Furthermore, they're color-coded with the Scatter Plot so it's a really intuitive plot that can easily be read and interpreted.
Note: If you find the overlapping colors distracting, such as where the red and blue Histograms blend together, setting the histtype to step will remove the filled colors:
Conclusion
In this guide, we've taken a look at how to plot a Joint Plot in Matplotlib - a Scatter Plot with accompanying Distribution Plots (Histograms) on both axes of the plot, to explore the distribution of the variables that constitute the Scatter Plot itself.
Although this task is more suited for libraries like Seaborn, which have built-in support for Joint Plots, Matplotlib is the underlying engine that enables Seaborn to make these plots effortlessly.
If you're interested in Data Visualization and don't know where to start, make sure to check out our bundle of books on Data Visualization in Python:
Data Visualization in Python with Matplotlib and Pandas is a book designed to take absolute beginners to Pandas and Matplotlib, with basic Python knowledge, and allow them to build a strong foundation for advanced work with these libraries - from simple plots to animated 3D plots with interactive buttons.
It serves as an in-depth guide that'll teach you everything you need to know about Pandas and Matplotlib, including how to construct plot types that aren't built into the library itself.
Data Visualization in Python, a book for beginner to intermediate Python developers, guides you through simple data manipulation with Pandas, covers core plotting libraries like Matplotlib and Seaborn, and shows you how to take advantage of declarative and experimental libraries like Altair. More specifically, over the span of 11 chapters this book covers 9 Python libraries: Pandas, Matplotlib, Seaborn, Bokeh, Altair, Plotly, GGPlot, GeoPandas, and VisPy.
It serves as a unique, practical guide to Data Visualization, using a plethora of tools you might use in your career.
In VCL, of course, you can use Real PNG files with the TPngImage class. With TPngImage class, you can load and manipulate PNG graphics. But there is another option to use PNG image files with PNGComponents.
PNGComponents is a set of components that allows you to easily include PNG files in your application. This is a source-only release of TurboPack PNGComponents. It includes design-time and runtime packages for Delphi and C++Builder. Additionally, the library supports both Win32 and Win64.
Why should I use PngComponents?
The library says it best: “The PngComponents library offers a major leap forward in creating nice GUI’s in designtime. Not only does it speed up the implementation of alphablended icons in your application, it eases the way you can use them throughout your software. No longer do you need to put them in a resource file manually and then manually drawing them on a temporary bitmap and assigning that to somewhere. Adding beautiful alphablended icons to your interface is but a few clicks away.“
How can I get the PngComponents?
You can download and install these PNG Components from GetIt Package Manager with one click.
You will get 5 different PNG components:
TPngSpeedButton
TPngBitBtn
TPngImageList
TPngImageCollection
TPngCheckListBox
Each component has several properties to handle PNG files easily.
What sort of things are included on the Awesome Pascal List?
As the official readme says, “Note that only open-source projects are considered. Dead projects (not updated for 3 years or more) must be really awesome or unique to be included“. The list also allows readers to open pull requests for any cool or useful libraries and Delphi projects not already included.
Below is a screenshot from “Lazy Delphi Builder” which is just one of the great projects on the list.
Almost everything on the list is a gem and they’re all specifically aimed at Delphi, Object Pascal and Pascal programmers.
What sort of things are on the list?
The list contains general libraries, multimedia, game development, communications (including networking and lower level things like serial ports), GUI, database and a whole raft of non visual classes and utilities.
There are other libraries and utilities too, including DUnit, DUnitX and DelphiSpec for automated testing.
The image shows DelphiSpec running so you can get a feel for what it does. In a couple of words, it is a library for running automated tests written in plain language.
Where can I read more?
It’s not possible to show all of the features of Awesome Pascal in one article, so if you want to check them out, please visit the project on GitHub: https://github.com/Fr0sT-Brutal/awesome-pascal
The rapid and unexpected transition to work from home is one of the biggest issues affecting companies of all sizes and industries in 2020. As companies now take a brief pause after the mad rush during the first half of the year, they must take an honest look at their security posture to ensure that their intellectual property, employee and customer data, applications, and infrastructure are all being protected and that plans are in place to continue doing so in the future, given many companies will operate very differently going forward.
Security teams are facing challenges they have never experienced before
The exponential growth in remote users, combined with accelerated digital transformation efforts involving migration of applications and data to the cloud, has changed and expanded the attack surface for today’s organizations. Attacks and breaches have continued to be a danger to companies throughout the pandemic. Security teams are challenged to piece together solutions to detect and eradicate threats across multiple types of environments with solutions made up of technologies from multiple vendors, many of which were only designed to operate in legacy environments preceding the cloud era. Integration complexities, a lack of qualified security resources, and an unrelenting wave of attacks from cybercriminals make securing the organization a seemingly unattainable goal.
Today’s security reality is less than ideal in many cases
BlueVoyant speaks with a lot of companies about their security technology deployment. One of the main trends found is that they have accumulated a bunch of hardware and software over the years and are trying to make use of it somehow, but at the end of the day, they struggle to get it all to work together properly. Research has shown that this situation (commonly known as “tech sprawl”) can oftentimes result in a company being more exposed to attack than it realizes, as failing to correctly integrate various pieces of hardware and software can create gaps that allow cyber attackers to get in.
In addition to dealing with tech sprawl, IT and security teams are being asked to participate in digital transformation initiatives at their companies. These initiatives almost always involve moving large amounts of applications and data to the cloud to reap the benefits of lower infrastructure costs, greater flexibility, and on-demand scalability. Legacy security technologies simply don’t work in these new cloud environments.
How do you solve this problem?
What is the solution to eliminating the pain associated with tech sprawl while also providing the security your company needs in a cloud-first world? We believe that a cloud-native, fully integrated security solution is what companies need to operate safely in today’s dangerous cyber environment. To bring our vision to life, we are adopting Microsoft security technologies to build managed solutions that extend detection and threat eradication capabilities across a customer’s entire ecosystem, leveraging tools and integrations already included with a customer’s Microsoft 365 license. Our Managed Microsoft Security Services combine the design, deployment, 24x7x365 threat detection, and over 500 proprietary detection rules—designed and built on Microsoft-powered security technology—to provide the business and technology outcomes needed by our customers.
How does integrated Microsoft security technology work?
Here is an example of the integrated Microsoft security technology working together to successfully detect and eradicate a cyber threat:
A phishing email is received by a user on a managed endpoint.
Office 365 Security and Compliance Center provides visibility into the phishing attempt, and Defender for Office 365 Safe Links evaluates the link at the time-of-delivery to search for malicious or suspicious content. It finds nothing out of the ordinary and allows the message to be delivered to the user’s inbox. The end user opens the email and clicks the link. Defender for Office 365 again scans the link using Safe Links and finds a malicious file on the page that is linked. The user is presented with a webpage, warning them that the site may be malicious.
Since the user believes the email came from someone they know, they bypass the warning message and visit the link where malware gets downloaded to their machine in the background, causing a compromise that allows for elevated access on the endpoint.
Defender for Endpoint detects this and quarantines the file based on zero-day and runtime detections. It surfaces alerts that include insights into the threat and detailed information about events happening on the machine to the security team in the security operations center (SOC) dashboards.
Azure Active Directory Identity Protection sends additional compromise/threat escalation data to Microsoft Cloud App Security. Threat aggregation is calculated against machine learning normalization to assess threat severity.
Azure Sentinel conducts additional correlation analysis and follows a remediation playbook based on severity and aggregated threat calculation.
Remediation workflows revoke the user’s multi-factor authentication (MFA) token, triggering unified endpoint management (UEM) device compliance failure to revoke access grants in Conditional Access.
SOC analysts and end user compute staff confirm remediations before restoring access.
Who is BlueVoyant?
BlueVoyant was co-founded in 2017 and is led by several former Fortune 500 executives and government intelligence leaders. We recruit and retain top talent from the FBI, NSA, Unit 8200, GCHQ, and from leading private sector security firms. While we’re still a young company, our expertise in delivering Managed Microsoft Security Services to our customers is already well established. For example, in the recent “Forrester Wave: Midsize Managed Security Services Providers, Q3 2020” report, we were the only company highlighted for our experience in working with Azure Sentinel.
In addition to the existing portfolio of security services we offer today, we are always on the lookout for new ways to provide increased value to our customers who prefer Microsoft-powered security services. We are excited to announce that we acquired Managed Sentinel, a company specializing in Azure Sentinel and Microsoft 365 Defender deployments. By acquiring Managed Sentinel, BlueVoyant strengthens its ability to serve Microsoft customers globally. This allows Managed Sentinel to leverage BlueVoyant’s threat intelligence and managed detection and response (MDR) capabilities, enabling both BlueVoyant and Managed Sentinel to deliver full-service offerings for Microsoft security technologies, from customized deployments and ongoing maintenance to 24/7 security operations.
According to Mandana Javaheri, Director of Business Strategy, CSG Business Development, Microsoft, “The Managed Sentinel acquisition by BlueVoyant further expands their cybersecurity services capabilities to provide customers the consultative, advisory, and implementation expertise needed to fully maximize the value and adoption of Microsoft’s security product portfolio.”
To learn more about the Microsoft Intelligent Security Association (MISA), visit our website, where you can learn about the MISA program, product integrations and find MISA members. Visit the video playlist to learn about the strength of member integrations with Microsoft products.
To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.
As seen in recent sophisticated cyberattacks, especially human-operated campaigns, it’s critical to not only detect an attack as early as possible but also to rapidly determine the scope of the compromise and predict how it will progress. How an attack proceeds depends on the attacker’s goals and the set of tactics, techniques, and procedures (TTPs) that they utilize to achieve these goals. Hence, quickly associating observed behaviors and characteristics to threat actors provides important insights that can empower organizations to better respond to attacks.
At Microsoft, we use statistical methods to improve our ability to track specific threat actors and the TTPs associated with them. Threat actor tracking is a constant arms race: as defenders implement new detection and mitigation methods, attackers are quick to modify techniques and behaviors to evade detection or attribution. Manually mapping specific indicators like files, IP addresses, or known techniques to threat actors and keeping track of changes over time isn’t effective or scalable.
To tackle this challenge, we built probabilistic models that enable us to quickly predict the likely threat group responsible for an attack, as well as the likely next attack stages. With these models, security analysts can move from a manual method of investigating small sets of disparate signals to probabilistic determinations of likely threat groups based on all activity observed, comparing the activity against all known behaviors, both past and present, encoded in the model. These models help threat intelligence teams stay current on threat actor activity and help analysts quickly identify behaviors they need to analyze when investigating an attack.
In this blog we’ll outline a probabilistic graphical modeling framework used by Microsoft 365 Defender research and intelligence teams for threat actor tracking. Microsoft Threat Experts, our managed threat hunting service, utilizes this model to enhance our ability to quickly notify customers about attacks in their environments through targeted attack notifications. These notifications provide technical information and remediation guidance designed to empower customers to identify and mitigate critical threats in their environments.
The model enriches targeted attack notifications with additional context on the threat, the likely attacker and their motivation, the steps the said attacker is likely to make next, and the immediate action the customer can take to contain and remediate the attack. Below we discuss an incident in which automated threat actor tracking translated to real-world protection against a human-operated ransomware attack.
Predicting human-operated ransomware groups
The probabilistic model we discuss in this blog aids Microsoft Threat Experts analysts in sending quick, context-rich, threat actor-attributed notifications to customers in the earliest stages of attacks. In one recent case, for example, the model surfaced high-confidence data indicating the initial stages of a new ransomware actor in an organization just two minutes into the attack. This enabled analysts to quickly confirm the malicious behavior and the threat group involved, then send a targeted attack notification to the customer, who was able to stop the threat before the attackers could encrypt data and demand ransom:
The attacker compromises a device via Remote Desktop. This signal, one of many, starts the examination of the attack by the model, which knows that initial access via Remote Desktop is a technique often utilized by a certain threat actor.
Attackers copy common open-source tools and custom payloads to the device for such malicious activities as tampering with AV and credential theft, which would allow discovery and lateral movement. With these tools on the device, the model’s confidence increases.
The attacker begins running the tools and exhibiting behaviors typically associated with attacks by the threat actor.
Just two minutes into the attack, the model hits a threshold for activity that indicates the suspected threat actor is present in the organization.
Microsoft Threat Experts analysts are notified of the suspected actor activity identified by the model, and they quickly send a high-context targeted attack notification that includes technical information as well as actor attribution.
As the attacker attempts to tamper with the antivirus solution, the organization stops the attack, armed with the knowledge of the likely forthcoming activity they need to stop. The threat actor is prevented from performing their other known TTPs, ultimately stopping the ransomware deployment and activation.
Figure 1. Model predicting human-operated ransomware attack chain
Through the automated threat actor tracking model, Microsoft Threat Experts analysts were able to equip the organization with information about the attack as it was unfolding. The model-enriched targeted attack notification enabled the customer to stop a known human-operated ransomware group before they could cause significant damage. If not stopped, the threat actor would have been able to perform its typical behaviors, including clearing event logs, creating a persistence method, disabling and deleting backups and recovery options for the device, and finally encrypting data and demanding ransom.
Threat actor tracking through probabilistic graphical modeling
As the case study above shows, the ability to identify attacks with high confidence in the early stages is improved by rapidly associating malicious behaviors with threat actors. Using a probabilistic model to predict the likely threat actor behind an attack removes the need for analysts to manually evaluate and compare techniques and tools with known behaviors with threat groups.
Even with attackers frequently adjusting their toolkits, payloads, and techniques to evade detection, the model can help analysts learn new TTPs and then rapidly evaluate the behaviors to confirm the model’s prediction. This intelligence allows pivoting to find recently created attacker infrastructure and tools, and increases the ability to report, detect, slow, and stop the adversary.
In the next sections, we will provide more detail about this automated threat actor tracking model and discuss challenges, such as data collection and tagging. We will also share how we leverage security analyst expertise to continuously enrich these models with newfound attacker behavior and improve its ability to surface incidents with high confidence.
Data collection
The first challenge in threat prediction is translating data collected from recorded attacks into a set of well-defined TTPs. The idea is to define a knowledge base such that the approach is generalizable across different threat actor groups. For this purpose, we use the MITRE ATT&CK framework, which provides such a knowledge base and is widely used across the industry for classifying attack behaviors and understanding the lifecycle of an attack.
Attack behaviors need to be carefully mapped at the right level of granularity. If the behaviors are mapped to too broad a category (e.g., MITRE ATT&CK techniques like lateral movement), then discrete attackers cannot be distinguished. If the attack behaviors are too specific (e.g., documented adversary use of a specific file hash) any subtle changes to the behavior or tools used for a particular attack could be missed.
The model uses threat data from Microsoft Defender for Endpoint, as well as the broader Microsoft 365 Defender, which delivers unparalleled cross-domain visibility into attacks. Incidents (collections of alerts related to a specific attack) that have been tagged as associated with a threat group each correspond to a training sample. These incidents are augmented with more specific indicators of compromise, custom behavioral detections built by our threat hunting teams, and additional context from telemetry. This collection of alerts and detections is then mapped to the collection of TTPs being tracked.
The TTPs are used as variables in a Bayesian network model, which is a statistical model well suited for handling the challenges of our specific problem, including high dimensionality, interdependencies between TTPs, and missing or uncertain data.
Bayesian networks
Given TTPs of an attack observed in an organization, the goal is to identify the most likely threat actor involved and, consequently, the next attack stages, considering that any one TTP very rarely provides enough evidence to attribute an attack to a threat group. It’s the combination of these TTPs that provides the necessary evidence to identify the threat group.
We use Bayesian networks to model the relationship of TTPs and threat groups. Bayesian networks are a powerful tool that builds a joint distribution over a set of variables and encodes the relationship between them, which can be represented as a directed acyclic graph. Bayesian networks have properties that make them well-suited for this problem. For one, they are ideal for querying probabilities for a subset of unobserved variables (e.g., attacker groups) in the presence of other observed variables (TTPs). They are also ideal for handling missing or sparse data. Finally, using Bayesian models provides a principled approach to encoding expert knowledge through prior probability distributions that encode one’s belief about the quantity of interest before data is considered. With these properties, Bayesian networks have been shown to work well in correlating alerts from various detection systems and predicting future attack stages.[i][ii]
More formally, the set of possible TTPs for an actor are viewed as discrete random variables. Let X = {X1, …, Xn}, where each variable can take on one of two states, 0 or 1. The value of 1 corresponds to the TTP having been observed. Let the random variable Y correspond to the indicator variable for a specific threat actor or group of threat actors. Each variable is a node in a directed acyclic graph and the edges between the nodes encode the conditional dependencies between them.
A Bayesian network defines a joint distribution over the set of TTPs and threat actor group, so that:
P(X1, …, Xn, Y) = P(Y|Pa(Y)) ∏i=1…n P(Xi|Pa(Xi)),
where P(X1, …, Xn, Y) denotes the joint probability of the variables and threat actor group taking on specific values, Pa(Xi) denotes the set of parents of variable Xi in the graph, and P(Xi|Pa(Xi)) the probability that variable Xi takes on a certain value given (represented by |) the states of its parents in the graph. The conditional probabilities of observing a node being 0 or 1 given the set of parent states are represented by conditional probability tables.
Figure 2 shows a toy example where the variable Actor:X corresponds to the threat actor group, with six TTPs inspired by the MITRE ATT&CK framework, including T1570 (Lateral Tool Transfer), T1046 (Network Service Scanning), T1021 (Remote Services), T1562.001 (Impair Defenses: Disable or Modify Tools), T1543 (Create or Modify System Process), and Impact (TA0040; in this example, we do not specify the sub-technique, though that could easily be done). To illustrate, a directed edge between Transfer Tools and Actor:X indicates that the likelihood of observing the actor is directly related to whether we saw them transfer their attack tools. The node Disable Tools shows an example of a conditional probability table and how the probability of observing the technique changes with respect to the states of its parent nodes in the graph, Network Scanning and Transfer Tools.
Figure 2: A toy example showing a Bayesian network for Actor:X with six TTPs. A conditional probability table is also shown for variable Disable Security.
There are two inference tasks that are needed to fully specify the Bayesian network:
Structure learning: Given a set of training examples, estimate the graph that captures the dependencies between the variables.
Parameter learning: Given a set of training examples and the graph structure, learn the unknown parameters for the conditional probability tables P(Xi|Pa(Xi)).
Structure learning is largely driven by domain knowledge and eliciting expert feedback, which is covered in the next section. Parameter learning is done in the usual Bayesian way, where a prior distribution is specified for the unknown parameters, which can encode subject matter expertise. Then, the parameters are updated with data or new incidents as they arise, so that the final posterior probabilities reflect the prior beliefs from threat intelligence analysts and relevant evidence seen in the data. As new training data is obtained over time as part of hunting and investigations, the Bayesian network can easily be updated so that it always reflects the latest information on the threat actor TTPs.
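As a sketch of this conjugate updating for a single binary CPT entry, one can place a Beta prior on the entry and add observed counts; the prior counts and the incident outcomes below are hypothetical, chosen only to illustrate the mechanics:

```python
# Beta-Bernoulli update for one CPT entry, e.g.
# p = P(Disable Tools = 1 | Transfer Tools = 1).
# The Beta(a, b) prior encodes analyst belief; counts come from incidents.

# illustrative analyst prior: prior mean a / (a + b) = 0.8
a, b = 8.0, 2.0

# hypothetical new incidents where Transfer Tools was observed:
# 1 = Disable Tools also seen, 0 = not seen
observations = [1, 1, 0, 1, 1, 1, 0, 1]

# conjugate update: successes add to a, failures add to b
a_post = a + sum(observations)
b_post = b + len(observations) - sum(observations)

posterior_mean = a_post / (a_post + b_post)
print(posterior_mean)  # (8 + 6) / (8 + 2 + 8) = 14/18 ≈ 0.778
```

Because the posterior is again a Beta distribution, each new hunting or investigation result can be folded in incrementally without refitting from scratch.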
Because the Bayesian network defines a complete model for the variables and their relationships, it allows the analysts to query for information about any subset of variables and receive probabilistic responses. For example:
Given Transfer of Tools and Disable Security Tools have been observed but not Modify System Process, what is the topmost likely set of TTPs that will be observed next?
Given Lateral Movement has been observed, what is the likelihood of seeing Impact?
Given Network Scanning and Modify System Process, what is the probability that it is threat actor group Actor:X?
This model is particularly useful for its ability to marginalize over unobserved variables. For example, if one does not have enough confidence to say whether Impact occurred or not, one can sum over all possible states for that variable and still be able to answer any of the questions above, providing a probabilistic response that reflects that uncertainty.
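A minimal sketch of this marginalization, using a hypothetical chain network (structure and probabilities invented for illustration): to answer a query when one TTP is unobserved, sum the joint over that variable’s possible states in both the numerator and the evidence term.

```python
# P(Actor = 1 | Transfer Tools = 1) while a second TTP (Disable Tools)
# is unobserved: marginalize it out of numerator and denominator.
# All structure and numbers are illustrative, not from the real model.
from itertools import product

def joint(x1, x2, y):
    # chain X1 -> X2 -> Y with made-up conditional probability tables
    p_x1 = {1: 0.3, 0: 0.7}[x1]
    p_x2 = {(1, 1): 0.8, (1, 0): 0.2, (0, 1): 0.1, (0, 0): 0.9}[(x1, x2)]
    p_y = {(1, 1, 1): 0.9, (1, 1, 0): 0.1, (1, 0, 1): 0.4, (1, 0, 0): 0.6,
           (0, 1, 1): 0.3, (0, 1, 0): 0.7, (0, 0, 1): 0.05, (0, 0, 0): 0.95}[(x1, x2, y)]
    return p_x1 * p_x2 * p_y

# sum the unobserved TTP out of the joint
num = sum(joint(1, x2, 1) for x2 in (0, 1))                        # P(X1=1, Y=1)
den = sum(joint(1, x2, y) for x2, y in product((0, 1), repeat=2))  # P(X1=1)
print(round(num / den, 6))  # 0.8 in this toy model
```

The same enumeration answers any of the queries above; the returned probability automatically reflects the uncertainty in the unobserved variables.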
Finally, the interpretability of these graphical models is high. Analysts can readily see how observing certain techniques directly changes the probability of observing a threat actor or other techniques through the conditional probability tables. In addition, the graph allows easy visualization of how the techniques relate to each other and influence the variable representing the threat actor group.
Threat intelligence elicitation
The combination of minimal training examples with the high dimensionality of the set of possible techniques makes it critical to leverage domain knowledge and threat intelligence expertise.
Our statisticians work closely with threat analysts to incorporate the analysts’ large existing knowledge base into the model. Analysts help with learning the structure of the Bayesian network by informing which nodes are likely a priori to be correlated with each other. For instance, analysts might suggest that they often see Network Scanning followed by Lateral Movement. As we are largely concerned with post-breach attacks, the attack chain defines an inherent sequence of stages that are observed as an attack progresses, such as moving from gaining access to exploitation. This sequencing can help inform the orientation of the edges. Any remaining possible edges are learned from the training examples using one of the structure learning algorithms.[iii]
Once the attack graph is fully specified, the threat analysts help inform the strength of the relationships between the nodes (e.g., how much more likely it is to see Disabling Security Tools given Transfer Tools); this data is encoded in the prior to complete the specification of the model.
Finally, as a threat group changes their behavior over time, new nodes corresponding to new TTPs may need to be added or removed from the graph. This can be done by setting priors based on information from threat intelligence experts and using the alert database to assess correlations with other techniques already in the graph.
Figure 3 illustrates the expert-augmented probabilistic graphical modeling framework. Applying probabilistic learning over these constructed graphs, built from both data collected from real attacks and the vast knowledge of the threat intelligence community, provides a framework for both predicting the likely threat actor and predicting how an attack might evolve.
Figure 3. Sketch of framework
Conclusion
Across Microsoft, we use statistical models and machine learning to uncover threats hidden in billions of low-fidelity signals. The threat actor tracking model we introduced in this blog is exciting work with real impact in customer protection. We are still in the early stages of realizing the value of this approach, yet we already have had much success, especially in detecting and informing customers about human-operated attacks, which are some of the most prevalent and impactful threats today.
A core reason for this success is the combination of statistical expertise, threat hunting, and the very intensive work of vetting and discovering the combination of TTPs that indicate specific threat groups. Our ability to automatically identify threat actors from the data, predict next steps, and stop attacks is foundational for much of our work going forward, with many as-yet unrealized benefits in customer protection. In real terms, we have accelerated threat hunting to drive to conclusions that lead to real protection, and we will continue expanding that protection for our customers through the Microsoft Threat Experts service and the coordinated defense delivered by Microsoft 365 Defender.
Cole Sodja, Justin Carroll, Melissa Turcotte, Joshua Neil
Iterated Local Search is a stochastic global optimization algorithm.
It involves the repeated application of a local search algorithm to modified versions of a good solution found previously. In this way, it is like a clever version of the stochastic hill climbing with random restarts algorithm.
The intuition behind the algorithm is that random restarts can help to locate many local optima in a problem and that better local optima are often close to other local optima. Therefore modest perturbations to existing local optima may locate better or even best solutions to an optimization problem.
In this tutorial, you will discover how to implement the iterated local search algorithm from scratch.
After completing this tutorial, you will know:
Iterated local search is a stochastic global search optimization algorithm that is a smarter version of stochastic hill climbing with random restarts.
How to implement stochastic hill climbing with random restarts from scratch.
How to implement and apply the iterated local search algorithm to a nonlinear objective function.
Let’s get started.
Iterated Local Search From Scratch in Python. Photo by Susanne Nilsson, some rights reserved.
Tutorial Overview
This tutorial is divided into five parts; they are:
What Is Iterated Local Search
Ackley Objective Function
Stochastic Hill Climbing Algorithm
Stochastic Hill Climbing With Random Restarts
Iterated Local Search Algorithm
What Is Iterated Local Search
Iterated Local Search, or ILS for short, is a stochastic global search optimization algorithm.
It is related to, or an extension of, stochastic hill climbing and stochastic hill climbing with random restarts.
It’s essentially a more clever version of Hill-Climbing with Random Restarts.
Stochastic hill climbing is a local search algorithm that involves making random modifications to an existing solution and accepting the modification only if it results in better results than the current working solution.
Local search algorithms in general can get stuck in local optima. One approach to address this problem is to restart the search from a new randomly selected starting point. The restart procedure can be performed many times and may be triggered after a fixed number of function evaluations or if no further improvement is seen for a given number of algorithm iterations. This algorithm is called stochastic hill climbing with random restarts.
The simplest possibility to improve upon a cost found by LocalSearch is to repeat the search from another starting point.
Iterated local search is similar to stochastic hill climbing with random restarts, except rather than selecting a random starting point for each restart, a point is selected based on a modified version of the best point found so far during the broader search.
The perturbation of the best solution so far is like a large jump in the search space to a new region, whereas the perturbations made by the stochastic hill climbing algorithm are much smaller, confined to a specific region of the search space.
The heuristic here is that you can often find better local optima near to the one you’re presently in, and walking from local optimum to local optimum in this way often outperforms just trying new locations entirely at random.
This allows the search to be performed at two levels. The hill climbing algorithm is the local search for getting the most out of a specific candidate solution or region of the search space, and the restart approach allows different regions of the search space to be explored.
In this way, the algorithm Iterated Local Search explores multiple local optima in the search space, increasing the likelihood of locating the global optima.
The Iterated Local Search was proposed for combinatorial optimization problems, such as the traveling salesman problem (TSP), although it can be applied to continuous function optimization by using different step sizes in the search space: smaller steps for the hill climbing and larger steps for the random restart.
Now that we are familiar with the Iterated Local Search algorithm, let’s explore how to implement the algorithm from scratch.
Ackley Objective Function
First, let’s define a challenging optimization problem as the basis for implementing the Iterated Local Search algorithm.
The Ackley function is an example of a multimodal objective function that has a single global optima and multiple local optima in which a local search might get stuck.
As such, a global optimization technique is required. It is a two-dimensional objective function that has a global optima at [0,0], which evaluates to 0.0.
The example below implements the Ackley function and creates a three-dimensional surface plot showing the global optima and multiple local optima.
# ackley multimodal function
from numpy import arange
from numpy import exp
from numpy import sqrt
from numpy import cos
from numpy import e
from numpy import pi
from numpy import meshgrid
from matplotlib import pyplot

# objective function
def objective(x, y):
    return -20.0 * exp(-0.2 * sqrt(0.5 * (x**2 + y**2))) - exp(0.5 * (cos(2 * pi * x) + cos(2 * pi * y))) + e + 20

# define range for input
r_min, r_max = -5.0, 5.0
# sample input range uniformly at 0.1 increments
xaxis = arange(r_min, r_max, 0.1)
yaxis = arange(r_min, r_max, 0.1)
# create a mesh from the axis
x, y = meshgrid(xaxis, yaxis)
# compute targets
results = objective(x, y)
# create a surface plot with the jet color scheme
figure = pyplot.figure()
axis = figure.add_subplot(projection='3d')
axis.plot_surface(x, y, results, cmap='jet')
# show the plot
pyplot.show()
Running the example creates the surface plot of the Ackley function showing the vast number of local optima.
3D Surface Plot of the Ackley Multimodal Function
We will use this as the basis for implementing and comparing a simple stochastic hill climbing algorithm, stochastic hill climbing with random restarts, and finally iterated local search.
We would expect a stochastic hill climbing algorithm to get stuck easily in local minima. We would expect stochastic hill climbing with restarts to find many local minima, and we would expect iterated local search to perform better than either method on this problem if configured appropriately.
Stochastic Hill Climbing Algorithm
Core to the Iterated Local Search algorithm is a local search, and in this tutorial, we will use the Stochastic Hill Climbing algorithm for this purpose.
The Stochastic Hill Climbing algorithm involves first generating a random starting point and current working solution, then generating perturbed versions of the current working solution and accepting them if they are better than the current working solution.
Given that we are working on a continuous optimization problem, a solution is a vector of values to be evaluated by the objective function, in this case, a point in a two-dimensional space bounded by -5 and 5.
We can generate a random point by sampling the search space with a uniform probability distribution. For example:
...
# generate a random point in the search space
solution = bounds[:, 0] + rand(len(bounds)) * (bounds[:, 1] - bounds[:, 0])
We can generate perturbed versions of the current working solution using a Gaussian probability distribution with a mean equal to the current solution and a standard deviation set by a hyperparameter that controls how far the search is allowed to explore from the current working solution.
We will refer to this hyperparameter as “step_size“, for example:
...
# generate a perturbed version of a current working solution
candidate = solution + randn(len(bounds)) * step_size
Importantly, we must check that generated solutions are within the search space.
This can be achieved with a custom function named in_bounds() that takes a candidate solution and the bounds of the search space and returns True if the point is in the search space, False otherwise.
# check if a point is within the bounds of the search
def in_bounds(point, bounds):
    # enumerate all dimensions of the point
    for d in range(len(bounds)):
        # check if out of bounds for this dimension
        if point[d] < bounds[d, 0] or point[d] > bounds[d, 1]:
            return False
    return True
This function can then be called during the hill climb to confirm that new points are in the bounds of the search space, and if not, new points can be generated.
Tying this together, the function hillclimbing() below implements the stochastic hill climbing local search algorithm. It takes the objective function, the bounds of the problem, the number of iterations, and the step size as arguments and returns the best solution and its evaluation.
# hill climbing local search algorithm
def hillclimbing(objective, bounds, n_iterations, step_size):
    # generate an initial point
    solution = None
    while solution is None or not in_bounds(solution, bounds):
        solution = bounds[:, 0] + rand(len(bounds)) * (bounds[:, 1] - bounds[:, 0])
    # evaluate the initial point
    solution_eval = objective(solution)
    # run the hill climb
    for i in range(n_iterations):
        # take a step
        candidate = None
        while candidate is None or not in_bounds(candidate, bounds):
            candidate = solution + randn(len(bounds)) * step_size
        # evaluate candidate point
        candidate_eval = objective(candidate)
        # check if we should keep the new point
        if candidate_eval <= solution_eval:
            # store the new point
            solution, solution_eval = candidate, candidate_eval
            # report progress
            print('>%d f(%s) = %.5f' % (i, solution, solution_eval))
    return [solution, solution_eval]
We can test this algorithm on the Ackley function.
We will fix the seed for the pseudorandom number generator to ensure we get the same results each time the code is run.
The algorithm will be run for 1,000 iterations and a step size of 0.05 units will be used; both hyperparameters were chosen after a little trial and error.
At the end of the run, we will report the best solution found.
...
# seed the pseudorandom number generator
seed(1)
# define range for input
bounds = asarray([[-5.0, 5.0], [-5.0, 5.0]])
# define the total iterations
n_iterations = 1000
# define the maximum step size
step_size = 0.05
# perform the hill climbing search
best, score = hillclimbing(objective, bounds, n_iterations, step_size)
print('Done!')
print('f(%s) = %f' % (best, score))
Tying this together, the complete example of applying the stochastic hill climbing algorithm to the Ackley objective function is listed below.
# hill climbing search of the ackley objective function
from numpy import asarray
from numpy import exp
from numpy import sqrt
from numpy import cos
from numpy import e
from numpy import pi
from numpy.random import randn
from numpy.random import rand
from numpy.random import seed

# objective function
def objective(v):
    x, y = v
    return -20.0 * exp(-0.2 * sqrt(0.5 * (x**2 + y**2))) - exp(0.5 * (cos(2 * pi * x) + cos(2 * pi * y))) + e + 20

# check if a point is within the bounds of the search
def in_bounds(point, bounds):
    # enumerate all dimensions of the point
    for d in range(len(bounds)):
        # check if out of bounds for this dimension
        if point[d] < bounds[d, 0] or point[d] > bounds[d, 1]:
            return False
    return True

# hill climbing local search algorithm
def hillclimbing(objective, bounds, n_iterations, step_size):
    # generate an initial point
    solution = None
    while solution is None or not in_bounds(solution, bounds):
        solution = bounds[:, 0] + rand(len(bounds)) * (bounds[:, 1] - bounds[:, 0])
    # evaluate the initial point
    solution_eval = objective(solution)
    # run the hill climb
    for i in range(n_iterations):
        # take a step
        candidate = None
        while candidate is None or not in_bounds(candidate, bounds):
            candidate = solution + randn(len(bounds)) * step_size
        # evaluate candidate point
        candidate_eval = objective(candidate)
        # check if we should keep the new point
        if candidate_eval <= solution_eval:
            # store the new point
            solution, solution_eval = candidate, candidate_eval
            # report progress
            print('>%d f(%s) = %.5f' % (i, solution, solution_eval))
    return [solution, solution_eval]

# seed the pseudorandom number generator
seed(1)
# define range for input
bounds = asarray([[-5.0, 5.0], [-5.0, 5.0]])
# define the total iterations
n_iterations = 1000
# define the maximum step size
step_size = 0.05
# perform the hill climbing search
best, score = hillclimbing(objective, bounds, n_iterations, step_size)
print('Done!')
print('f(%s) = %f' % (best, score))
Running the example performs the stochastic hill climbing search on the objective function. Each improvement found during the search is reported and the best solution is then reported at the end of the search.
Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.
In this case, we can see about 13 improvements during the search and a final solution of about f(-0.981, 1.965), resulting in an evaluation of about 5.381, which is far from f(0.0, 0.0) = 0.
Next, we will modify the algorithm to perform random restarts and see if we can achieve better results.
Stochastic Hill Climbing With Random Restarts
The Stochastic Hill Climbing With Random Restarts algorithm involves the repeated running of the Stochastic Hill Climbing algorithm and keeping track of the best solution found.
First, let’s modify the hillclimbing() function to take the starting point of the search rather than generating it randomly. This will help later when we implement the Iterated Local Search algorithm.
# hill climbing local search algorithm
def hillclimbing(objective, bounds, n_iterations, step_size, start_pt):
    # store the initial point
    solution = start_pt
    # evaluate the initial point
    solution_eval = objective(solution)
    # run the hill climb
    for i in range(n_iterations):
        # take a step
        candidate = None
        while candidate is None or not in_bounds(candidate, bounds):
            candidate = solution + randn(len(bounds)) * step_size
        # evaluate candidate point
        candidate_eval = objective(candidate)
        # check if we should keep the new point
        if candidate_eval <= solution_eval:
            # store the new point
            solution, solution_eval = candidate, candidate_eval
    return [solution, solution_eval]
Next, we can implement the random restart algorithm by repeatedly calling the hillclimbing() function a fixed number of times.
On each call, we will generate a new randomly selected starting point for the hill climbing search.
...
# generate a random initial point for the search
start_pt = None
while start_pt is None or not in_bounds(start_pt, bounds):
    start_pt = bounds[:, 0] + rand(len(bounds)) * (bounds[:, 1] - bounds[:, 0])
# perform a stochastic hill climbing search
solution, solution_eval = hillclimbing(objective, bounds, n_iter, step_size, start_pt)
We can then inspect the result and keep it if it is better than any result of the search we have seen so far.
...
# check for new best
if solution_eval < best_eval:
    best, best_eval = solution, solution_eval
    print('Restart %d, best: f(%s) = %.5f' % (n, best, best_eval))
Tying this together, the random_restarts() function below implements the stochastic hill climbing algorithm with random restarts.
# hill climbing with random restarts algorithm
def random_restarts(objective, bounds, n_iter, step_size, n_restarts):
    best, best_eval = None, 1e+10
    # enumerate restarts
    for n in range(n_restarts):
        # generate a random initial point for the search
        start_pt = None
        while start_pt is None or not in_bounds(start_pt, bounds):
            start_pt = bounds[:, 0] + rand(len(bounds)) * (bounds[:, 1] - bounds[:, 0])
        # perform a stochastic hill climbing search
        solution, solution_eval = hillclimbing(objective, bounds, n_iter, step_size, start_pt)
        # check for new best
        if solution_eval < best_eval:
            best, best_eval = solution, solution_eval
            print('Restart %d, best: f(%s) = %.5f' % (n, best, best_eval))
    return [best, best_eval]
We can then apply this algorithm to the Ackley objective function. In this case, we will limit the number of random restarts to 30, chosen arbitrarily.
The complete example is listed below.
# hill climbing search with random restarts of the ackley objective function
from numpy import asarray
from numpy import exp
from numpy import sqrt
from numpy import cos
from numpy import e
from numpy import pi
from numpy.random import randn
from numpy.random import rand
from numpy.random import seed

# objective function
def objective(v):
    x, y = v
    return -20.0 * exp(-0.2 * sqrt(0.5 * (x**2 + y**2))) - exp(0.5 * (cos(2 * pi * x) + cos(2 * pi * y))) + e + 20

# check if a point is within the bounds of the search
def in_bounds(point, bounds):
    # enumerate all dimensions of the point
    for d in range(len(bounds)):
        # check if out of bounds for this dimension
        if point[d] < bounds[d, 0] or point[d] > bounds[d, 1]:
            return False
    return True

# hill climbing local search algorithm
def hillclimbing(objective, bounds, n_iterations, step_size, start_pt):
    # store the initial point
    solution = start_pt
    # evaluate the initial point
    solution_eval = objective(solution)
    # run the hill climb
    for i in range(n_iterations):
        # take a step
        candidate = None
        while candidate is None or not in_bounds(candidate, bounds):
            candidate = solution + randn(len(bounds)) * step_size
        # evaluate candidate point
        candidate_eval = objective(candidate)
        # check if we should keep the new point
        if candidate_eval <= solution_eval:
            # store the new point
            solution, solution_eval = candidate, candidate_eval
    return [solution, solution_eval]

# hill climbing with random restarts algorithm
def random_restarts(objective, bounds, n_iter, step_size, n_restarts):
    best, best_eval = None, 1e+10
    # enumerate restarts
    for n in range(n_restarts):
        # generate a random initial point for the search
        start_pt = None
        while start_pt is None or not in_bounds(start_pt, bounds):
            start_pt = bounds[:, 0] + rand(len(bounds)) * (bounds[:, 1] - bounds[:, 0])
        # perform a stochastic hill climbing search
        solution, solution_eval = hillclimbing(objective, bounds, n_iter, step_size, start_pt)
        # check for new best
        if solution_eval < best_eval:
            best, best_eval = solution, solution_eval
            print('Restart %d, best: f(%s) = %.5f' % (n, best, best_eval))
    return [best, best_eval]

# seed the pseudorandom number generator
seed(1)
# define range for input
bounds = asarray([[-5.0, 5.0], [-5.0, 5.0]])
# define the total iterations
n_iter = 1000
# define the maximum step size
step_size = 0.05
# total number of random restarts
n_restarts = 30
# perform the hill climbing search
best, score = random_restarts(objective, bounds, n_iter, step_size, n_restarts)
print('Done!')
print('f(%s) = %f' % (best, score))
Running the example will perform a stochastic hill climbing with random restarts search for the Ackley objective function. Each time an improved overall solution is discovered, it is reported and the final best solution found by the search is summarized.
Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.
In this case, we can see three improvements during the search and that the best solution found was approximately f(0.002, 0.002), which evaluated to about 0.009, which is much better than a single run of the hill climbing algorithm.
Next, let’s look at how we can implement the iterated local search algorithm.
Iterated Local Search Algorithm
The Iterated Local Search algorithm is a modified version of the stochastic hill climbing with random restarts algorithm.
The important difference is that the starting point for each application of the stochastic hill climbing algorithm is a perturbed version of the best point found so far.
We can implement this algorithm by using the random_restarts() function as a starting point. On each restart iteration, we generate a modified version of the best solution found so far instead of a random starting point.
This can be achieved by using a step size hyperparameter, much like is used in the stochastic hill climber. In this case, a larger step size value will be used given the need for larger perturbations in the search space.
...
# generate an initial point as a perturbed version of the last best
start_pt = None
while start_pt is None or not in_bounds(start_pt, bounds):
    start_pt = best + randn(len(bounds)) * p_size
Tying this together, the iterated_local_search() function is defined below.
# iterated local search algorithm
def iterated_local_search(objective, bounds, n_iter, step_size, n_restarts, p_size):
    # define starting point
    best = None
    while best is None or not in_bounds(best, bounds):
        best = bounds[:, 0] + rand(len(bounds)) * (bounds[:, 1] - bounds[:, 0])
    # evaluate current best point
    best_eval = objective(best)
    # enumerate restarts
    for n in range(n_restarts):
        # generate an initial point as a perturbed version of the last best
        start_pt = None
        while start_pt is None or not in_bounds(start_pt, bounds):
            start_pt = best + randn(len(bounds)) * p_size
        # perform a stochastic hill climbing search
        solution, solution_eval = hillclimbing(objective, bounds, n_iter, step_size, start_pt)
        # check for new best
        if solution_eval < best_eval:
            best, best_eval = solution, solution_eval
            print('Restart %d, best: f(%s) = %.5f' % (n, best, best_eval))
    return [best, best_eval]
We can then apply the algorithm to the Ackley objective function. In this case, we will use a larger step size value of 1.0 for the random restarts, chosen after a little trial and error.
The complete example is listed below.
# iterated local search of the ackley objective function
from numpy import asarray
from numpy import exp
from numpy import sqrt
from numpy import cos
from numpy import e
from numpy import pi
from numpy.random import randn
from numpy.random import rand
from numpy.random import seed

# objective function
def objective(v):
    x, y = v
    return -20.0 * exp(-0.2 * sqrt(0.5 * (x**2 + y**2))) - exp(0.5 * (cos(2 * pi * x) + cos(2 * pi * y))) + e + 20

# check if a point is within the bounds of the search
def in_bounds(point, bounds):
    # enumerate all dimensions of the point
    for d in range(len(bounds)):
        # check if out of bounds for this dimension
        if point[d] < bounds[d, 0] or point[d] > bounds[d, 1]:
            return False
    return True

# hill climbing local search algorithm
def hillclimbing(objective, bounds, n_iterations, step_size, start_pt):
    # store the initial point
    solution = start_pt
    # evaluate the initial point
    solution_eval = objective(solution)
    # run the hill climb
    for i in range(n_iterations):
        # take a step
        candidate = None
        while candidate is None or not in_bounds(candidate, bounds):
            candidate = solution + randn(len(bounds)) * step_size
        # evaluate candidate point
        candidate_eval = objective(candidate)
        # check if we should keep the new point
        if candidate_eval <= solution_eval:
            # store the new point
            solution, solution_eval = candidate, candidate_eval
    return [solution, solution_eval]

# iterated local search algorithm
def iterated_local_search(objective, bounds, n_iter, step_size, n_restarts, p_size):
    # define starting point
    best = None
    while best is None or not in_bounds(best, bounds):
        best = bounds[:, 0] + rand(len(bounds)) * (bounds[:, 1] - bounds[:, 0])
    # evaluate current best point
    best_eval = objective(best)
    # enumerate restarts
    for n in range(n_restarts):
        # generate an initial point as a perturbed version of the last best
        start_pt = None
        while start_pt is None or not in_bounds(start_pt, bounds):
            start_pt = best + randn(len(bounds)) * p_size
        # perform a stochastic hill climbing search
        solution, solution_eval = hillclimbing(objective, bounds, n_iter, step_size, start_pt)
        # check for new best
        if solution_eval < best_eval:
            best, best_eval = solution, solution_eval
            print('Restart %d, best: f(%s) = %.5f' % (n, best, best_eval))
    return [best, best_eval]

# seed the pseudorandom number generator
seed(1)
# define range for input
bounds = asarray([[-5.0, 5.0], [-5.0, 5.0]])
# define the total iterations
n_iter = 1000
# define the maximum step size
s_size = 0.05
# total number of random restarts
n_restarts = 30
# perturbation step size
p_size = 1.0
# perform the iterated local search
best, score = iterated_local_search(objective, bounds, n_iter, s_size, n_restarts, p_size)
print('Done!')
print('f(%s) = %f' % (best, score))
Running the example will perform an Iterated Local Search of the Ackley objective function.
Each time an improved overall solution is discovered, it is reported and the final best solution found by the search is summarized at the end of the run.
Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.
In this case, we can see four improvements during the search and that the best solution found was two very small inputs that are close to zero, which evaluated to about 0.0003, which is better than either a single run of the hill climber or the hill climber with restarts.
Stephen Ball is a Chartered IT Professional and Embarcadero MVP who has led development teams for over 15 years within the UK, Europe and beyond, working with a range of blue-chip companies. He originally spent six years at Embarcadero as Senior Technical Pre-Sales Engineer, Associate Product Manager for the award-winning InterBase, and Senior Product Marketing Manager. He then spent two years with Nokia, where he defined the company’s IoT services strategy for network planning, including furthering the usage of AI within Telco Networks, before returning to the Idera group as World Wide Pre-Sales Director for Embarcadero in 2019.
In his new whitepaper “RAD Studio Guide for Managers”, Stephen looks at how RAD Studio has evolved within the broader history of software development, and how it continues to create new tooling and frameworks and supports the latest development practices and protocols. He looks at market trends impacting software development today, the evolution of cross-platform tools and approaches as an alternative to native code, and new trends such as low-code and no-code options for application development.
Weighing the pros and cons of each of these varied paths, Stephen proceeds to explain why RAD Studio represents the no-compromise option – the IDE that combines the flexibility of cross-platform with the reliability, performance and security of native code. RAD Studio compiles true native code for the latest versions of Windows, iOS, macOS, Android and Linux using a single easy-to-write, easy-to-maintain codebase.
This paper will have a broad appeal for members and leaders of software development teams, and is written for those following or evaluating market trends and possible solutions to use for both desktop and mobile applications.
Ready to read the “RAD Studio Guide for Managers” whitepaper?
There are many data visualization libraries in Python, yet Matplotlib is the most popular library out of all of them. Matplotlib’s popularity is due to its reliability and utility - it's able to create both simple and complex plots with little code. You can also customize the plots in a variety of ways.
In this tutorial, we'll cover how to plot a Pie Chart in Matplotlib.
Pie charts represent data broken down into categories/labels. They're an intuitive and simple way to visualize proportional data - such as percentages.
Plot a Pie Chart in Matplotlib
To plot a pie chart in Matplotlib, we can call the pie() function of the PyPlot or Axes instance.
The only mandatory argument is the data we'd like to plot, such as a feature from a dataset:
import matplotlib.pyplot as plt
x = [15, 25, 25, 30, 5]
fig, ax = plt.subplots()
ax.pie(x)
plt.show()
This generates a rather simple, but plain, Pie Chart, with each value assigned a proportionally sized slice of the pie:
Let's add some labels, so that it's easier to distinguish what's what here:
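The code for this step didn't survive in this copy; here's a minimal sketch of what it likely looks like. The label names are hypothetical survey categories, drawn from the ones the later sections of this tutorial refer to:

```python
import matplotlib.pyplot as plt

x = [15, 25, 25, 30, 5]
# Hypothetical survey categories, matching those referenced later in the text
labels = ['Very Likely', 'Likely', 'Unsure', 'Unlikely', 'Very Unlikely']

fig, ax = plt.subplots()
# pie() returns the wedge patches and the label Text objects
wedges, texts = ax.pie(x, labels=labels)
plt.show()
```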
Now, the Pie Chart will have some additional data that allows us to interpret it a bit easier:
Customizing Pie Charts in Matplotlib
When preparing data visualizations for presentations, papers, or simply to share with your peers, you might want to stylize and customize them a bit: using different colors that correlate to the categories, showing percentages on slices instead of relying on visual perception alone, or exploding slices to highlight them.
Let's take a look at how Matplotlib lets us customize Pie Charts.
Change Pie Chart Colors
To change the colors of a Pie Chart in Matplotlib, we'll need to supply an array of colors to the colors argument, while plotting it:
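The example code is missing here; a sketch under the same assumptions as before, using Matplotlib's named Tableau palette colors. Only blue for Very Likely and red for Very Unlikely are stated in the text; the middle three colors are arbitrary choices:

```python
import matplotlib.pyplot as plt

x = [15, 25, 25, 30, 5]
labels = ['Very Likely', 'Likely', 'Unsure', 'Unlikely', 'Very Unlikely']
# Tableau palette colors: blue for 'Very Likely', red for 'Very Unlikely';
# the middle colors are arbitrary picks for illustration
colors = ['tab:blue', 'tab:cyan', 'tab:gray', 'tab:orange', 'tab:red']

fig, ax = plt.subplots()
wedges, texts = ax.pie(x, labels=labels, colors=colors)
plt.show()
```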
Here, we've created a really simple correlation between the responses and the colors they're assigned. Very Likely will be blue in the Tableau Palette, while Very Unlikely will be red.
Running this code results in:
Show Percentages on Slices
Looking at the Pie Chart we've made so far, it's clear that there are more Unsure and Likely respondents than other categories individually. Though, it's oftentimes easier to interpret a Pie Chart both visually and numerically.
To add numerical percentages to each slice, we use the autopct argument. It automatically sets the percentages in each wedge/slice, and accepts the standard Python string formatting notation:
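The corresponding code is absent from this copy; a sketch using the '%.0f%%' format string, with the same hypothetical data and labels as above:

```python
import matplotlib.pyplot as plt

x = [15, 25, 25, 30, 5]
labels = ['Very Likely', 'Likely', 'Unsure', 'Unlikely', 'Very Unlikely']

fig, ax = plt.subplots()
# With autopct set, pie() also returns the percentage Text objects
wedges, texts, autotexts = ax.pie(x, labels=labels, autopct='%.0f%%')
plt.show()
```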
By setting autopct to %.0f%%, we've chosen to format the percentages with 0 decimal places (only whole numbers), and added a % sign at the end. If we had omitted the surrounding %..% symbols, the strings wouldn't be formatted as percentages, but as literal values.
Running this code results in:
Explode/Highlight Wedges
Sometimes, it's important to highlight certain entries. For example, in our survey, a really small percentage of the respondents feel like the advent of something in question is Very Unlikely. Assuming that we'd want to point out the fact that most people don't think it's unlikely, we can explode the wedge:
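The example code is missing here; a sketch that offsets only the Very Unlikely wedge (assumed to be the last, smallest entry in the hypothetical data used throughout):

```python
import matplotlib.pyplot as plt

x = [15, 25, 25, 30, 5]
labels = ['Very Likely', 'Likely', 'Unsure', 'Unlikely', 'Very Unlikely']
# Offset only the 'Very Unlikely' wedge from the center
explode = [0, 0, 0, 0, 0.2]

fig, ax = plt.subplots()
wedges, texts = ax.pie(x, labels=labels, explode=explode)
plt.show()
```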
The explode argument accepts an array of values from 0..1, where the values themselves define how far away the wedge is from the center. By default, all wedges have an explode value of 0, so they're all connected to the center.
Setting this value to 1 would offset it by a lot, relative to the chart, so usually, you'll explode wedges by 0.1, 0.2, 0.3, and similar values. You can explode as many of them as you'd like, with different values to highlight different categories.
Running this code results in:
Adding a Shadow
To add a shadow to a Matplotlib pie chart, all you have to do is set the shadow argument to True:
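The example code is missing here; a sketch with the same hypothetical data, adding shadow=True:

```python
import matplotlib.pyplot as plt

x = [15, 25, 25, 30, 5]
labels = ['Very Likely', 'Likely', 'Unsure', 'Unlikely', 'Very Unlikely']

fig, ax = plt.subplots()
# shadow=True draws a drop shadow beneath the pie
wedges, texts = ax.pie(x, labels=labels, shadow=True)
plt.show()
```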
Finally, you can also rotate the chart by setting the starting angle. By default, it starts at 0 degrees (on the right-hand side) and populates wedges counter-clockwise. By setting the startangle argument to a number between 0..360, you can rotate the starting point anywhere around the circle:
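The example code is missing here; a sketch with the same hypothetical data, rotated by 180 degrees:

```python
import matplotlib.pyplot as plt

x = [15, 25, 25, 30, 5]
labels = ['Very Likely', 'Likely', 'Unsure', 'Unlikely', 'Very Unlikely']

fig, ax = plt.subplots()
# Start drawing the first wedge at 180 degrees instead of 0
wedges, texts = ax.pie(x, labels=labels, startangle=180)
plt.show()
```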
This results in a Pie Chart, rotated by 180 degrees, effectively flipping it to the other side:
Conclusion
In this tutorial, we've gone over how to plot a simple Pie Chart in Matplotlib with Python. We've started with plain Pie Charts, and then dived into how to customize them for both aesthetic and practical uses.
If you're interested in Data Visualization and don't know where to start, make sure to check out our bundle of books on Data Visualization in Python:
Data Visualization in Python with Matplotlib and Pandas is a book designed to take absolute beginners with basic Python knowledge through Pandas and Matplotlib, and allow them to build a strong foundation for advanced work with these libraries - from simple plots to animated 3D plots with interactive buttons.
It serves as an in-depth guide that'll teach you everything you need to know about Pandas and Matplotlib, including how to construct plot types that aren't built into the library itself.
Data Visualization in Python, a book for beginner to intermediate Python developers, guides you through simple data manipulation with Pandas, covers core plotting libraries like Matplotlib and Seaborn, and shows you how to take advantage of declarative and experimental libraries like Altair. More specifically, over the span of 11 chapters this book covers 9 Python libraries: Pandas, Matplotlib, Seaborn, Bokeh, Altair, Plotly, GGPlot, GeoPandas, and VisPy.
It serves as a unique, practical guide to Data Visualization, in a plethora of tools you might use in your career.
The launch of coinlayer, an API that integrates real-time crypto rates on websites and applications, has made it easier for developers to provide exchange rates for over 385 cryptocurrencies.
One of the most-watched markets globally is the cryptocurrency market, and the rising interest of investors has made it crucial to provide reliable cryptocurrency data in real-time.
Due to the extreme volatility of most cryptocurrencies, prices typically change from day to day. You need an app, service, or platform that can provide rates every hour. Coinlayer offers this kind of service against several fiat currencies for over 385 different coins.
Developers, blockchain experts, and crypto affiliates can add crypto rates to websites using the coinlayer API. coinlayer also opens a window of opportunity for the average person, since the API is simple enough for anyone to use.
Introduction to coinlayer
coinlayer is the most trusted and authoritative resource for accurate crypto market data, drawn from more than 25 exchanges. The coinlayer API is built around three main features: performance, ease of use, and compatibility.
The coinlayer API offers:
An easy REST framework
Comprehensive and interactive API documentation
Integration guides
Real-time Cryptocurrency JSON Exchange Rates
Response time up to 20 milliseconds
The best part!
coinlayer is also free to use. However, the free version has certain limitations, and you have to pay for a premium plan to unlock the full feature set. Nevertheless, coinlayer's pricing is not as high as other crypto market data APIs, which often come with high monthly fees, no customer support, and a low monthly quota.
Check the pricing!
coinlayer Pricing
The platform offers different plans for different needs. The free plan undoubtedly is entirely free without any hidden charges. Additionally, as per your requirements, you can opt from the following available premium plans:
Basic: $9.99 per month/ $95.90 per year
Professional: $39.99 per month/ $383.90 per year
Professional Plus: $79.99 per month/ $767.90 per year
Enterprise: Contact Sales
coinlayer Features
Extensive Cryptocurrency Database
Collect up-to-date cryptocurrency data from 25+ markets, instantly, for more than 385 coins.
Historical Data
Query cryptocurrency data over time, with historical rates reaching back as far as 2011.
Robust JSON API
A solid and highly accessible cloud platform supports the coinlayer API, which delivers the data in milliseconds.
Authoritative Sources
Many reputable crypto-exchange providers influence crypto rates for coinlayer API to ensure maximum precision.
Dedicated Support
The team of experts at coinlayer takes customer support very seriously, helping users at every level, from cryptocurrency basics to advanced usage.
Bank-Grade Security
The coinlayer API protects both request and response data transmission through the 256-bit HTTPS industry-standard encryption.
So, let’s get started with understanding how to add crypto rates to your website using coinlayer.
How to Add Crypto Rates to your Website using coinlayer?
coinlayer API
The API for coinlayer comprises a series of endpoints, functionalities, and options. coinlayer obtains crypto data from some of the biggest cryptocurrency exchanges that are requested using HTTP GET. The accuracy and reliability of the crypto data returned by the coinlayer API are the strongest owing to its sophisticated fallback algorithm.
coinlayer Quickstart Tool
Onboarding for beginners is made simple with the Quickstart tool, which displays all of the functionality of the API. Once signed up, you can use the Quickstart Tool to evaluate every API endpoint with a single click.
Note that you need a free API access key to start using the Quickstart tool.
Supported Target Currencies
By default, the coinlayer API converts cryptocurrency rates to US dollars. Customers subscribed to the Basic plan or higher can use the target parameter to switch the target currency to any other supported fiat currency code.
A total of 166 world currencies are supported by the coinlayer API. Unlike related APIs that may require additional configuration, coinlayer works out of the box for every supported target.
Check out the below list of response objects for coinlayer API and their description:
Getting Started with coinlayer API
API Access Key
Your API access key is the unique token for accessing the coinlayer API. After signing in to the dashboard, you can find your API access key.
Upon completion of your registration, a forever-free API key will be issued. This is more than enough for training purposes and should be sufficient for basic use.
Further, after logging in you get a window to validate the API. Queries run against the live database, and the data for all available cryptocurrencies can be explored through the JSON endpoints. For convenience, required query parameters are highlighted in orange and optional parameters in blue.
To authenticate the API, append the access_key parameter to the API’s base URL and set it to your API access key value.
The main configuration consists of the base URL (https://api.coinlayer.com), the endpoint (live, list, convert, etc.), and the API access key. The response returns the request's success status, links to the terms and privacy pages, endpoint-specific fields such as the target currency, and the endpoint's dataset. coinlayer supports JSONP callbacks, so the optional callback GET parameter will wrap the result within a function.
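To make the configuration concrete, here is a small sketch of building an authenticated request URL. The base URL, the live endpoint, the access_key parameter, and the target parameter all come from this article; the helper function itself is just an illustration:

```python
from urllib.parse import urlencode

BASE_URL = "https://api.coinlayer.com"  # base URL from the article

def coinlayer_url(endpoint: str, access_key: str, **params: str) -> str:
    """Build an authenticated coinlayer request URL (illustrative helper)."""
    params["access_key"] = access_key
    return f"{BASE_URL}/{endpoint}?{urlencode(params)}"

# e.g. live rates, converted to EUR (target requires the Basic plan or higher)
url = coinlayer_url("live", "YOUR_ACCESS_KEY", target="EUR")
```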
API Response
As you can see above, we have used the Live endpoint while appending the API access key. So, the basic API response for all available cryptocurrencies is shown below in JSON format with the exchange rate data:
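The sample response itself didn't survive in this copy; the placeholder below illustrates the response shape based on the fields described above (success status, terms/privacy links, target currency, and rates). The numbers are made up, not real market data:

```python
import json

# Illustrative placeholder for a "live" endpoint response; values are fake
sample = """
{
  "success": true,
  "terms": "https://coinlayer.com/terms",
  "privacy": "https://coinlayer.com/privacy",
  "target": "USD",
  "rates": {"BTC": 50000.0, "ETH": 2000.0}
}
"""

data = json.loads(sample)
# Check the success flag before reading rates
btc_rate = data["rates"]["BTC"] if data["success"] else None
```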
Reviewing coinlayer API Endpoints
Overall, six API endpoints, each with different functionality, are available in the coinlayer API.
1. Live Data
This endpoint is used to query the API for the latest available exchange rate data.
Developers at financial or digital-currency trading firms can use coinlayer to incorporate cryptocurrency trading features into their websites or mobile applications. coinlayer is an optimal solution for real-time, stable exchange rates across all supported cryptocurrencies.
Anyone who seeks to integrate live reference prices into their ventures must take the coinlayer API into account.
This is a set of helper functions that make it easy to work with the JSON format, without having to write a bunch of classes and handlers. It is perfect for people who work with integrated applications, such as web service hosts or clients, or for projects that use NoSQL databases.
For now, it is compatible with Win32 and Win64, and with Delphi XE3 through Rio, according to the official GitHub README.
This package works as helpers for your existing components. To use it, you just need to add this folder to your library path:
../dataset-serialize/src
and declare it in your units:
uses DataSet.Serialize;
Using it, you can easily convert datasets to JSON and back, and even manipulate nested structures using master/detail linked datasets.
Check out some examples of how simple it is to use this powerful, free, open-source package.
Validate JSON
var
  LJSONArray: TJSONArray;
begin
  LJSONArray := qrySamples.ValidateJSON('{"country":"Brazil"}');
end;
Load from JSON
begin
qrySamples.LoadFromJSON('{"firstName":"Vinicius Sanchez","country":"Brazil"}');
end;
Merge from JSON
begin
qrySamples.MergeFromJSONObject('{"firstName":"Vinicius","country":"United States"}');
end;
Over the last few years, there has been a rise in open-source Delphi projects on GitHub. Many tech companies that utilize Delphi are publishing their code to GitHub to share their solutions to different problems in the software development business.
InstantObjects – this is the integrated framework for developing an object-oriented solution in Delphi. This framework enables the creation of applications based on persistent business objects. A business object is an object that has a set of properties and values, operations, and connections to other business objects. Business objects include business data and model business behavior.
The InstantObjects framework simplifies the process of turning ideas into products; it shortens time-to-market and helps keep business focus. Since it integrates easily with the Delphi IDE, you can create business applications with it in no time.
InstantObjects offers:
Model realization in the Delphi IDE via integrated two-way tools.
Object persistence in the most common relational databases or flat XML-based files.
Object presentation via standard data-aware controls.
Serialization/deserialization of objects using the delphi-neon library.
When you install the framework from the GetIt Package Manager, you will get several demo applications for you to learn more about the use cases of the framework. And you can find more information about the framework on its wiki page.
program IOConsoleDemo;
{$APPTYPE CONSOLE}
uses
SysUtils,
Model in 'Model.pas',
InstantPersistence,
InstantXML;
{$R *.mdr} {Model}
var
ApplicationPath : string;
Connection : TXMLFilesAccessor;
Connector : TInstantXMLConnector;
SimpleClass : TSimpleClass;
Id : string;
i : integer;
begin
ApplicationPath := ExtractFilePath(ParamStr(0));
Try
//In every application the .mdr model file is normally linked into the
//application by InstantObjects.
//You can generate it at run-time by calling, for example:
//CreateInstantModel;
//Or you can read it from disk, for example:
//InstantModel.LoadFromFile(ApplicationPath+'MinimalModel.xml');
//Connect to database
Connection := nil;
Connector := nil;
Try
Connection := TXMLFilesAccessor.Create(nil);
Connection.RootFolder := ApplicationPath+'XMLStorage';
Connector := TInstantXMLConnector.Create(nil);
Connector.Connection := Connection;
Connector.LoginPrompt := False;
Connector.IsDefault := True;
WriteLn('Building Database structure');
Connector.BuildDatabase;
WriteLn('Connecting to Database.');
Connector.Connect;
for i := 0 to 100 do
begin
WriteLn('Storing Object.');
SimpleClass := TSimpleClass.Create;
Try
SimpleClass.StringProperty := IntToStr(Random(MaxInt));
SimpleClass.Store;
Id := SimpleClass.Id;
Finally
SimpleClass.Free;
End;
WriteLn('Retrieving and changing Object.');
SimpleClass := TSimpleClass.Retrieve(Id);
Try
SimpleClass.StringProperty := IntToStr(Random(MaxInt));
SimpleClass.Store;
Finally
SimpleClass.Free;
End;
(*
WriteLn('Retrieving and deleting Object.');
SimpleClass := TSimpleClass.Retrieve(Id);
Try
SimpleClass.Dispose;
Finally
SimpleClass.Free;
End;
*)
end;
WriteLn('Disconnecting from Database.');
Connector.Disconnect;
Finally
Connector.Free;
Connection.Free;
End;
WriteLn('Done!');
Except
on E: Exception do WriteLn(E.Message);
End;
end.
In the March 2021 edition of the Communications of the ACM there is an article by Niklaus Wirth about the 50th anniversary of Pascal. What started at ETH Zurich in 1970 was publicized in the article “The programming language Pascal” by Wirth in the Acta Informatica journal in March of 1971.
I am very happy to be just a small part of the 50 years of Pascal as a student (I wrote my first Pascal program at Cal Poly San Luis Obispo California in 1972 on a CDC timesharing system), software engineer, Borland employee and member of the Embarcadero developer community.
While I do miss travelling and visiting with developers, I am happy to still be able to program, write and create videos about Delphi and C++Builder as a semi-retired developer here in Ashland Oregon.
To remind me of the importance of Turbo Pascal and Delphi, I have (on the wall in my home office) a framed blow-up of the original Turbo Pascal version 1 ad that appeared in the November 1983 edition of Byte Magazine.
It’s so cool that Delphi can still (with very little change) compile and execute Turbo Pascal programs and most Pascal programs from the original Kathleen Jensen and Niklaus Wirth “PASCAL User Manual and Report” from 1975.
Microsoft considers Zero Trust an essential component of any organization’s security plan. We have partnered with Cloud Security Alliance, a not-for-profit organization that promotes cloud computing best practices, to bring together executive security leaders to discuss and share insights about their Zero Trust journeys.
In our first discussion, we sat down with 10 executive security leaders from prominent energy, finance, insurance, and manufacturing companies in a virtual roundtable, to understand what has worked and discover where they needed to adjust their Zero Trust security model. Our collective goal was to learn from one another and then share what we’ve learned with other organizations. Discussions like these give us valuable opportunities to grow and led us to publish an eBook to share those conversations with other cybersecurity professionals.
Today, we are publishing the “Examining Zero Trust: An executive roundtable discussion” eBook as a result of those conversations. The eBook describes how the Zero Trust security model involves thinking beyond perimeter security and moving to a more holistic security approach. The eBook complements other resources we have published to help organizations expedite their journeys in this critical area, such as the Microsoft Zero Trust Maturity Model and adoption guidance in the Zero Trust Deployment Center. Zero Trust assumes breach and verifies each request as if it originates from an uncontrolled network. If Zero Trust had a motto, it would be: never trust, always verify. That means never trusting anyone or anything—inside or outside the firewall, on the endpoint, on the server, or in the cloud.
Zero Trust strategies
Introducing Zero Trust into your organization requires implementing controls and technologies across all foundational elements: identities, devices, applications, data, infrastructure, and networks. Roundtable participants offered successful Zero Trust strategies that respect the value of each of these foundational elements.
Strategy #1 – Use identities to control access
Identities—representing people, services, and IoT devices—are the common denominator across networks, endpoints, and applications. In a Zero Trust security model, they function as a powerful, flexible, and granular way to control access to data. Or, as one participant explained it, “The new perimeter is identity, and you need a strong identity that is validated.”
When any identity attempts to access any resource, security controls should verify the identity with strong authentication, ensure access is compliant and typical for that identity, and confirm that the identity follows least privilege access principles.
Strategy #2 – Elevate authentication
Incorporating multifactor authentication or continuous authentication into your identity management strategy can substantially improve your organization’s information security posture. One roundtable participant shared that by extending identity management with continuous authentication capabilities, their organization can now validate identity when a user’s IP address or routine behavior pattern changes.
“Zero Trust will only work if it is transparent to the end-user,” said a participant. “You have to make it easy and transparent. If you want to authenticate every five minutes or every second, that’s fine, as long as the end-user doesn’t have to do anything—as long as you can validate through other methods. For example, the endpoint can be one of the factors for multifactor authentication.”
Strategy #3 – Go passwordless
Passwordless authentication replaces the traditional password with two or more verification factors secured with a cryptographic key pair. When registered, the device creates a public and private key. The private key can be unlocked using a local gesture, such as a PIN or biometric authentication (fingerprint scan, facial recognition, or iris recognition).
Strategy #4 – Segment your corporate network
Network segmentation can be a pain point for business IT because firewalls represent early segmentation, and this can complicate development and testing. Ultimately, the IT team relies more on security teams to fix networking connectivity and access issues.
However, segmenting networks and conducting deeper in-network micro-segmentation is important for Zero Trust because in a mobile- and cloud-first world, all business-critical data is accessed over network infrastructure. Networking controls provide critical functionality to enhance visibility and help prevent attackers from moving laterally across the network.
Strategy #5 – Secure your devices
With the Zero Trust model, the same security policies are applied whether the device is corporately owned or a personally owned phone or tablet, also called a “bring your own device” (BYOD). Corporate, contractor, partner, and guest devices are treated the same whether the device is fully managed by IT or only the apps and data are secured. And this is true whether these endpoints—PC, Mac, smartphone, tablet, wearable, or IoT device—are connected using the secure corporate network, home broadband, or public internet.
“In a BYOD world, the device is the explosive piece,” said one participant. “If you allow unpatched devices to connect to your network, it is, in essence, walking into your base with live ordnance, and it can go bad quickly. Why wouldn’t you test outside to begin with?”
Strategy #6 – Segment your applications
Benefitting fully from cloud apps and services requires finding the right balance between providing access and maintaining control to ensure that apps, and the data they contain, are protected. Apply controls and technologies to discover shadow IT, ensure appropriate in-app permissions, gate access based on real-time analytics, monitor for abnormal behavior, restrict user actions, and validate secure configuration options.
“It is becoming easier and more achievable to have segmentation between the applications,” said a participant. “Being able to provide excessive privileges/role-based access is becoming part of the policy engine. The application piece of the puzzle seems to be solving itself more intelligently as time goes on. This approach gets validated every time I hear an end-user is able to dial in on the problem.”
Strategy #7 – Define roles and access controls
With the rapid rise in remote work, organizations must consider alternative ways of achieving modern security controls. It’s useful to operationalize roles and tie them to a policy as part of authorization, single sign-on, passwordless access, and segmentation. However, each role you define must be managed now and in the future, so be selective about how many roles you create to avoid management challenges later.
“If you create a thousand roles in your organization to be that granular, you will have problems with management down the road,” said a participant. “You’re going to end up with massive amounts of accounts that are not updated, and that’s where you have breaches.”
The journey toward Zero Trust
The foundational focus of organizations varies as they start their Zero Trust journey. Some of the organizations represented by roundtable participants began their Zero Trust journey with user identity and access management, while others started with network macro- and micro-segmentation or with their applications. These leaders agreed that developing a holistic strategy to address Zero Trust is critical and that you should start small and build confidence before rolling out Zero Trust across your organization.
That usually means taking a phased approach that targets specific areas based on the organization’s Zero Trust maturity, available resources, and priorities. For example, you could start with a new greenfield project in the cloud or experiment in a developer and test environment. Once you’ve built confidence, we recommend extending the Zero Trust model throughout the entire digital estate, while embracing it as an integrated security philosophy and end-to-end strategy moving forward. You’re not alone in this journey. Successful organizations have walked this path, and Microsoft is happy to be with you every step of the way.
To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.
Matplotlib is one of the most widely used data visualization libraries in Python. From simple to complex visualizations, it's the go-to library for most.
In this tutorial, we'll take a look at how to plot multiple line plots in Matplotlib - on the same Axes or Figure.
If you'd like to read more about plotting line plots in general, as well as customizing them, make sure to read our guide on Plotting Line Plots with Matplotlib.
Plot Multiple Line Plots in Matplotlib
Depending on the style you're using, OOP or MATLAB-style, you'll use either the plt interface or an ax instance to plot, with the same approach.
To plot multiple line plots in Matplotlib, you simply repeatedly call the plot() function, which will apply the changes to the same Figure object:
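The original listing isn't reproduced here, so below is a minimal sketch of the idea using made-up data; the Agg backend line only makes the snippet runnable headless and can be dropped when working interactively:

```python
import matplotlib
matplotlib.use('Agg')  # headless backend for this sketch; omit to display the window
import matplotlib.pyplot as plt

x = [1, 2, 3, 4, 5]
fig, ax = plt.subplots()
ax.plot(x, [1, 2, 3, 4, 5])    # first line plot
ax.plot(x, [1, 4, 9, 16, 25])  # second call draws onto the same Axes/Figure
plt.show()
```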
Without setting any customization flags, the default color cycle will apply, drawing both line plots on the same Figure object and adjusting the color to differentiate between them.
Now, let's generate some random sequences using Numpy, and customize the line plots a tiny bit by setting a specific color for each, and labeling them:
import matplotlib.pyplot as plt
import numpy as np
line_1 = np.random.randint(low=0, high=50, size=50)
line_2 = np.random.randint(low=-15, high=100, size=50)
fig, ax = plt.subplots()
ax.plot(line_1, color='green', label='Line 1')
ax.plot(line_2, color='red', label='Line 2')
ax.legend(loc='upper left')
plt.show()
We don't have to supply the X-axis values to a line plot, in which case the values 0..n-1 will be used, where n is the number of elements in the data you're plotting. In our case, we've got two sequences of data - line_1 and line_2, which will both be plotted against the same X-axis.
While plotting, we've assigned colors to them, using the color argument, and labels for the legend, using the label argument. This results in:
Plot Multiple Line Plots with Different Scales
Sometimes, you might have two datasets, fit for line plots, but their values are significantly different, making it hard to compare both lines. For example, if line_1 had an exponentially increasing sequence of numbers, while line_2 had a linearly increasing sequence - surely and quickly enough, line_1 would have values so much larger than line_2, that the latter fades out of view.
Let's use Numpy to make an exponentially increasing sequence of numbers, and plot it next to another line on the same Axes, linearly:
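That listing also isn't shown above; a sketch of it, reusing the two sequences defined in the next full example, might look like this (linear_sequence all but disappears next to the exponential curve):

```python
import matplotlib
matplotlib.use('Agg')  # headless backend for this sketch; omit to display the window
import matplotlib.pyplot as plt
import numpy as np

linear_sequence = [1, 2, 3, 4, 5, 6, 7, 10, 15, 20]
exponential_sequence = np.exp(np.linspace(0, 10, 10))  # grows to roughly 22,000

fig, ax = plt.subplots()
ax.plot(linear_sequence)       # fades out of view against the exponential line
ax.plot(exponential_sequence)
plt.show()
```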
The exponential growth in the exponential_sequence goes out of proportion very fast, and it looks like there's absolutely no difference in the linear_sequence, since it's so minuscule relative to the exponential trend of the other sequence.
Now, let's plot the exponential_sequence on a logarithmic scale, which will produce a visually straight line, since the Y-scale will exponentially increase. If we plot it on a logarithmic scale, and the linear_sequence just increases by the same constant, we'll have two overlapping lines and we will only be able to see the one plotted after the first.
Let's change up the linear_sequence a bit to make it observable once we plot both:
import matplotlib.pyplot as plt
import numpy as np
# Sequences
linear_sequence = [1, 2, 3, 4, 5, 6, 7, 10, 15, 20]
exponential_sequence = np.exp(np.linspace(0, 10, 10))
fig, ax = plt.subplots()
# Plot linear sequence, and set tick labels to the same color
ax.plot(linear_sequence, color='red')
ax.tick_params(axis='y', labelcolor='red')
# Generate a new Axes instance, on the twin-X axes (same position)
ax2 = ax.twinx()
# Plot exponential sequence, set scale to logarithmic and change tick color
ax2.plot(exponential_sequence, color='green')
ax2.set_yscale('log')
ax2.tick_params(axis='y', labelcolor='green')
plt.show()
This time around, we have to use the OOP interface, since we're creating a new Axes instance. One Axes has one scale, so we create a new one in the same position as the first, set its scale to logarithmic, and plot the exponential sequence on it.
This results in:
We've also changed the tick label colors to match the color of the line plots themselves, otherwise, it'd be hard to distinguish which line is on which scale.
Plot Multiple Line Plots with Multiple Y-Axes
Finally, we can apply the same scale (linear, logarithmic, etc.), but have different values on the Y-axis of each line plot. This is achieved by having multiple Y-axes, on different Axes objects, in the same position.
For example, the linear_sequence won't go above 20 on the Y-axis, while the exponential_sequence will go up to 20000. We can plot them both linearly, simply by plotting them on different Axes objects in the same position, each of which sets its Y-axis ticks automatically to accommodate the data we're feeding in:
import matplotlib.pyplot as plt
import numpy as np
# Sequences
linear_sequence = [1, 2, 3, 4, 5, 6, 7, 10, 15, 20]
exponential_sequence = np.exp(np.linspace(0, 10, 10))
fig, ax = plt.subplots()
# Plot linear sequence, and set tick labels to the same color
ax.plot(linear_sequence, color='red')
ax.tick_params(axis='y', labelcolor='red')
# Generate a new Axes instance, on the twin-X axes (same position)
ax2 = ax.twinx()
# Plot exponential sequence (linear scale this time) and change tick color
ax2.plot(exponential_sequence, color='green')
ax2.tick_params(axis='y', labelcolor='green')
plt.show()
We've again created another Axes in the same position as the first one, so we can plot in the same place in the Figure but on different Axes objects, which lets us set values for each Y-axis individually.
Without setting the Y-scale to logarithmic this time, both will be plotted linearly:
Conclusion
In this tutorial, we've gone over how to plot multiple Line Plots on the same Figure or Axes in Matplotlib and Python. We've covered how to plot on the same Axes with the same scale and Y-axis, as well as how to plot on the same Figure with different and identical Y-axis scales.
If you're interested in Data Visualization and don't know where to start, make sure to check out our bundle of books on Data Visualization in Python:
Data Visualization in Python with Matplotlib and Pandas is a book designed to take absolute beginners to Pandas and Matplotlib, with basic Python knowledge, and allow them to build a strong foundation for advanced work with these libraries - from simple plots to animated 3D plots with interactive buttons.
It serves as an in-depth guide that'll teach you everything you need to know about Pandas and Matplotlib, including how to construct plot types that aren't built into the library itself.
Data Visualization in Python, a book for beginner to intermediate Python developers, guides you through simple data manipulation with Pandas, covers core plotting libraries like Matplotlib and Seaborn, and shows you how to take advantage of declarative and experimental libraries like Altair. More specifically, over the span of 11 chapters this book covers 9 Python libraries: Pandas, Matplotlib, Seaborn, Bokeh, Altair, Plotly, GGPlot, GeoPandas, and VisPy.
It serves as a unique, practical guide to Data Visualization across a plethora of tools you might use in your career.
“WEAP provides a comprehensive, flexible, and user-friendly framework for planning and policy analysis. Many water-resource professionals appreciate the convenience of the various software toolboxes - models, databases, spreadsheets, and more - that WEAP offers. Awareness of the challenges of freshwater management grows by the day, and allocating limited water resources among agricultural, municipal, and environmental uses requires fully integrated management of supply and demand, water quality, and ecosystem considerations. The goal of WEAP (the Water Evaluation and Planning System) is to provide a practical and robust tool for integrated water resources planning that addresses these challenges.”
When did you start using RAD Studio/Delphi and how long have you been using it?
I started with Delphi in 1996. Soon after, I wrote and marketed word processor components (WPTools). These components are still my flagship products even today. At the time, Delphi was a ground-breaking development environment, which allowed you to write native Windows programs that were very fast and had little overhead. Delphi is a compiled language, yet it is just as easy to use as an interpreted language. Nowadays these differences don’t exist anymore, since we have just-in-time compilers, but back then this was very important and extremely helpful.
What was it like building software before you had RAD Studio/Delphi?
I was passionate about programming in C and later in C++ and had a successful database program on the market. However, at the time, the possibilities for debugging a program were very limited and it was often the case that the programmer ended up searching for errors longer than they did programming. Often, you also had to deal with the MFC and C++.
How did RAD Studio/Delphi help you create your showcase application?
Delphi provides the FireMonkey graphics library, which is ideally suited to programming a presentation program like fotoARRAY. On the one hand, that is because it allows a scalable user interface; on the other hand, because it is extremely fast at displaying images and animation. And then you can also compile your programs for macOS.
What made RAD Studio/Delphi stand out from other options?
Delphi is so practical because you can program for other platforms using a single project file. In addition, the run-time is relatively small and does not need to be installed in the system. This allows Delphi programs to run with very low maintenance. They can also be easily started from a storage medium without installation to the system. I could also envision a version of fotoARRAY or my word processor WPTools on Android, running on a tablet computer.
What made you happiest about working with RAD Studio/Delphi?
For me, one of the best Delphi features is the compiler speed. You never wait more than 2-3 seconds for the program to start. I also like the underlying language, Pascal, which is not as prone to errors as C++. Another advantage that I have come to appreciate is FireMonkey. With FireMonkey, amazingly complex user interfaces are possible without using many components. This is because you can nest components and sub-windows inside each other.
What have you been able to achieve through using RAD Studio/Delphi to create your showcase application?
I was able to fulfill a dream of mine within a relatively short period of time. This dream, which I have cherished since the introduction of digital photography, was to program my own photo management. In the meantime, the program even includes its own set of unique, yet powerful tools for photo editing.
What are some future plans for your showcase application?
I have plans to incorporate my word processing technology (WPTools) to make it possible to create documents which include not only text but also numerous images. With regular desktop word processors, such documents often cause an exponential increase in memory consumption. With my technology, I can avoid this problem. Furthermore, I want to use my PDF technology (wPDF) to create PDF files from image libraries (contact sheets) and from the aforementioned photo stories. The embedded photo editing tools should allow the user to quickly edit RAW formats – particularly if no specialized program is available – or to process .jpg files without altering the originals. The main focus of the editing is to adjust the colors and exposure of the image, whereas other steps, such as compositing, can be done with external programs. Ultimately, it is more important to me to create a good interface to external RAW and image processing tools than integrated processing – there are some very good tools available on the market and the user should not be locked in to one tool. On the whole, I would like to reach a wider audience with fotoARRAY. I also plan to incorporate new technologies into fotoARRAY but will take care that it remains a fast tool for image browsing.
Thank you, Julian! You may click the link below to view his showcase entry.
Most GPS modules have a serial port, which makes them excellent to connect to a microcontroller or computer. It is common for the microcontroller to parse the NMEA data. Parsing is just removing the pieces of data from the NMEA sentence so the microcontroller can do something useful with the data.
NMEA is an acronym for the National Marine Electronics Association. Why do you need to parse NMEA data? Because when you get NMEA output, you get something like this:
$GPGGA,181908.00,3404.7041778,N,07044.3966270,W,4,13,1.00,495.144,M,29.200,M,0.10,0000*40
But you may only need, say, the north latitude or the west longitude. For this, you need a parser for the NMEA format.
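To make "parsing" concrete, here is a rough, hypothetical sketch (in Python, for brevity) that pulls the latitude and longitude fields out of the GGA sentence above by hand; a real parser also validates the trailing checksum:

```python
# Split a GGA sentence into its comma-separated fields by hand.
sentence = ("$GPGGA,181908.00,3404.7041778,N,07044.3966270,"
            "W,4,13,1.00,495.144,M,29.200,M,0.10,0000*40")
body = sentence[1:sentence.rindex('*')]  # drop the leading '$' and the '*40' checksum
fields = body.split(',')
latitude, lat_hemisphere = fields[2], fields[3]    # ddmm.mmmmmmm, 'N'
longitude, lon_hemisphere = fields[4], fields[5]   # dddmm.mmmmmmm, 'W'
print(latitude, lat_hemisphere, longitude, lon_hemisphere)
```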
NemaTode is yet another lightweight, generic NMEA parser. It also comes with a GPS data interface to handle the most popular GPS NMEA sentences.
This is all you need to use the GPS NMEA sentence data.
#include <iostream>
#include <nmeaparse/nmea.h>  // NemaTode header

using namespace std;
using namespace nmea;

NMEAParser parser;
GPSService gps(parser);
// (optional) Called when a sentence has valid syntax
parser.onSentence += [](const NMEASentence& nmea){
    cout << "Received $" << nmea.name << endl;
};
// (optional) Called when GPS data is read/changed
gps.onUpdate += [](GPSService& gps){
    // There are *tons* of GPSFix properties
    if( gps.fix.locked() ){
        cout << " # Position: " << gps.fix.latitude << ", " << gps.fix.longitude << endl;
    } else {
        cout << " # Searching..." << endl;
    }
};
// Feed in a log file or a byte stream, one line at a time
try {
    parser.readLine("FILL WITH A NMEA MESSAGE");
} catch (NMEAParseError&) {
    // Syntax error, skip this sentence
}
Features:
NMEA Parsing of standard and custom sentences
NMEA Generation of “standard” and custom sentences.
GPS Fix class to manage and organize all the GPS-related data.
It can be challenging to develop a neural network predictive model for a new dataset.
One approach is to first inspect the dataset and develop ideas for what models might work, then explore the learning dynamics of simple models on the dataset, then finally develop and tune a model for the dataset with a robust test harness.
This process can be used to develop effective neural network models for classification and regression predictive modeling problems.
In this tutorial, you will discover how to develop a Multilayer Perceptron neural network model for the Wood’s Mammography classification dataset.
After completing this tutorial, you will know:
How to load and summarize the Wood’s Mammography dataset and use the results to suggest data preparations and model configurations to use.
How to explore the learning dynamics of simple MLP models on the dataset.
How to develop robust estimates of model performance, tune model performance and make predictions on new data.
Let’s get started.
Develop a Neural Network for Woods Mammography Dataset Photo by Larry W. Lo, some rights reserved.
Tutorial Overview
This tutorial is divided into 4 parts; they are:
Woods Mammography Dataset
Neural Network Learning Dynamics
Robust Model Evaluation
Final Model and Make Predictions
Woods Mammography Dataset
The first step is to define and explore the dataset.
We will be working with the “mammography” standard binary classification dataset, sometimes called “Woods Mammography“.
The focus of the problem is on detecting breast cancer from radiological scans, specifically the presence of clusters of microcalcifications that appear bright on a mammogram.
There are two classes and the goal is to distinguish between microcalcifications and non-microcalcifications using the features for a given segmented object.
Non-microcalcifications: negative case, or majority class.
Microcalcifications: positive case, or minority class.
The Mammography dataset is a widely used standard machine learning dataset, used to explore and demonstrate many techniques designed specifically for imbalanced classification.
Note: To be crystal clear, we are NOT “solving breast cancer“. We are exploring a standard classification dataset.
Below is a sample of the first 5 rows of the dataset.
We can load the dataset as a pandas DataFrame directly from the URL; for example:
# load the mammography dataset and summarize the shape
from pandas import read_csv
# define the location of the dataset
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/mammography.csv'
# load the dataset
df = read_csv(url, header=None)
# summarize shape
print(df.shape)
Running the example loads the dataset directly from the URL and reports the shape of the dataset.
In this case, we can confirm that the dataset has 7 variables (6 input and one output) and that the dataset has 11,183 rows of data.
This is a modestly sized dataset for a neural network, suggesting that a small network would be appropriate.
It also suggests that using k-fold cross-validation would be a good idea, given that it will give a more reliable estimate of model performance than a train/test split, and because a single model will fit in seconds instead of the hours or days required for the largest datasets.
(11183, 7)
Next, we can learn more about the dataset by looking at summary statistics and a plot of the data.
# show summary statistics and plots of the mammography dataset
from pandas import read_csv
from matplotlib import pyplot
# define the location of the dataset
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/mammography.csv'
# load the dataset
df = read_csv(url, header=None)
# show summary statistics
print(df.describe())
# plot histograms
df.hist()
pyplot.show()
Running the example first loads the data and then prints summary statistics for each variable.
We can see that the values are generally small with means close to zero.
A histogram plot is then created for each variable.
We can see that perhaps most variables have an exponential distribution, and perhaps variable 5 (the last input variable) is Gaussian with outliers/missing values.
We may have some benefit in using a power transform on each variable in order to make the probability distribution less skewed which will likely improve model performance.
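As a hedged sketch of that idea (on synthetic skewed data rather than the mammography dataset), scikit-learn's PowerTransformer can apply a Yeo-Johnson power transform:

```python
import numpy as np
from sklearn.preprocessing import PowerTransformer

# Synthetic stand-in: three skewed, exponential-looking input columns.
rng = np.random.default_rng(1)
X = rng.exponential(scale=2.0, size=(1000, 3))

# Yeo-Johnson reduces the skew; standardize=True (the default) also
# zero-means and unit-scales each column afterwards.
pt = PowerTransformer(method='yeo-johnson')
X_trans = pt.fit_transform(X)
print(X_trans.mean(axis=0))  # close to zero after the transform
```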
Histograms of the Mammography Classification Dataset
It may be helpful to know how imbalanced the dataset actually is.
We can use the Counter object to count the number of examples in each class, then use those counts to summarize the distribution.
The complete example is listed below.
# summarize the class ratio of the mammography dataset
from pandas import read_csv
from collections import Counter
# define the location of the dataset
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/mammography.csv'
# load the csv file as a data frame
dataframe = read_csv(url, header=None)
# summarize the class distribution
target = dataframe.values[:,-1]
counter = Counter(target)
for k,v in counter.items():
per = v / len(target) * 100
print('Class=%s, Count=%d, Percentage=%.3f%%' % (k, v, per))
Running the example summarizes the class distribution, confirming the severe class imbalance, with approximately 98 percent for the majority class (no cancer) and approximately 2 percent for the minority class (cancer).
This is helpful because if we use classification accuracy, then any model that achieves an accuracy less than about 97.7% does not have skill on this dataset.
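That baseline is just the majority-class ("no skill") accuracy; a quick sketch with scikit-learn's DummyClassifier on a synthetic 98/2 imbalanced target illustrates it:

```python
import numpy as np
from sklearn.dummy import DummyClassifier

# Synthetic 98/2 imbalanced target; the features are irrelevant to this baseline.
y = np.array([0] * 98 + [1] * 2)
X = np.zeros((100, 1))

baseline = DummyClassifier(strategy='most_frequent').fit(X, y)
print(baseline.score(X, y))  # the accuracy any real model must beat
```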
Now that we are familiar with the dataset, let’s explore how we might develop a neural network model.
Neural Network Learning Dynamics
We will develop a Multilayer Perceptron (MLP) model for the dataset using TensorFlow.
We cannot know what model architecture or learning hyperparameters would be good or best for this dataset, so we must experiment and discover what works well.
Given that the dataset is small, a small batch size is probably a good idea, e.g. 16 or 32 rows. Using the Adam version of stochastic gradient descent is a good idea when getting started as it will automatically adapt the learning rate and works well on most datasets.
Before we evaluate models in earnest, it is a good idea to review the learning dynamics and tune the model architecture and learning configuration until we have stable learning dynamics, then look at getting the most out of the model.
We can do this by using a simple train/test split of the data and review plots of the learning curves. This will help us see if we are over-learning or under-learning; then we can adapt the configuration accordingly.
First, we must ensure all input variables are floating-point values and encode the target label as integer values 0 and 1.
...
# ensure all data are floating point values
X = X.astype('float32')
# encode strings to integer
y = LabelEncoder().fit_transform(y)
Next, we can split the dataset into input and output variables, then into equal-sized (50/50) train and test sets.
We must ensure that the split is stratified by class, ensuring that the train and test sets have the same distribution of class labels as the whole dataset.
...
# split into input and output columns
X, y = df.values[:, :-1], df.values[:, -1]
# split into train and test datasets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, stratify=y, random_state=1)
We can define a minimal MLP model.
In this case, we will use one hidden layer with 50 nodes and one output layer (chosen arbitrarily). We will use the ReLU activation function in the hidden layer and the “he_normal” weight initialization, as together, they are a good practice.
...
# define model
model = Sequential()
model.add(Dense(50, activation='relu', kernel_initializer='he_normal', input_shape=(n_features,)))
model.add(Dense(1, activation='sigmoid'))
# compile the model
model.compile(optimizer='adam', loss='binary_crossentropy')
We will fit the model for 300 training epochs (chosen arbitrarily) with a batch size of 32 because it is a modestly sized dataset.
We are fitting the model on the raw data, which might not be ideal, but it is an important starting point.
...
history = model.fit(X_train, y_train, epochs=300, batch_size=32, verbose=0, validation_data=(X_test,y_test))
At the end of training, we will evaluate the model’s performance on the test dataset and report performance as the classification accuracy.
...
# predict test set (predict_classes was removed in recent Keras versions, so threshold the predicted probabilities instead)
yhat = (model.predict(X_test) > 0.5).astype('int32').flatten()
# evaluate predictions
score = accuracy_score(y_test, yhat)
print('Accuracy: %.3f' % score)
Finally, we will plot learning curves of the cross-entropy loss on the train and test sets during training.
Tying this all together, the complete example of evaluating our first MLP on the mammography dataset is listed below.
# fit a simple mlp model on the mammography and review learning curves
from pandas import read_csv
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
from sklearn.metrics import accuracy_score
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense
from matplotlib import pyplot
# load the dataset
path = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/mammography.csv'
df = read_csv(path, header=None)
# split into input and output columns
X, y = df.values[:, :-1], df.values[:, -1]
# ensure all data are floating point values
X = X.astype('float32')
# encode strings to integer
y = LabelEncoder().fit_transform(y)
# split into train and test datasets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, stratify=y, random_state=1)
# determine the number of input features
n_features = X.shape[1]
# define model
model = Sequential()
model.add(Dense(50, activation='relu', kernel_initializer='he_normal', input_shape=(n_features,)))
model.add(Dense(1, activation='sigmoid'))
# compile the model
model.compile(optimizer='adam', loss='binary_crossentropy')
# fit the model
history = model.fit(X_train, y_train, epochs=300, batch_size=32, verbose=0, validation_data=(X_test,y_test))
# predict test set (predict_classes was removed in recent Keras versions, so threshold the predicted probabilities instead)
yhat = (model.predict(X_test) > 0.5).astype('int32').flatten()
# evaluate predictions
score = accuracy_score(y_test, yhat)
print('Accuracy: %.3f' % score)
# plot learning curves
pyplot.title('Learning Curves')
pyplot.xlabel('Epoch')
pyplot.ylabel('Cross Entropy')
pyplot.plot(history.history['loss'], label='train')
pyplot.plot(history.history['val_loss'], label='val')
pyplot.legend()
pyplot.show()
Running the example first fits the model on the training dataset, then reports the classification accuracy on the test dataset.
Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.
In this case, we can see that the model performs better than a no-skill model, given that the accuracy is above about 97.7 percent; here it achieves an accuracy of about 98.8 percent.
Accuracy: 0.988
Line plots of the loss on the train and test sets are then created.
We can see that the model quickly finds a good fit on the dataset and does not appear to be over or underfitting.
Learning Curves of Simple Multilayer Perceptron on the Mammography Dataset
Now that we have some idea of the learning dynamics for a simple MLP model on the dataset, we can look at developing a more robust evaluation of model performance on the dataset.
Robust Model Evaluation
The k-fold cross-validation procedure can provide a more reliable estimate of MLP performance, although it can be very slow.
This is because k models must be fit and evaluated. This is not a problem when the dataset size is small, as with the mammography dataset.
We can use the StratifiedKFold class and enumerate each fold manually, fit the model, evaluate it, and then report the mean of the evaluation scores at the end of the procedure.
...
# prepare cross validation
kfold = StratifiedKFold(10, shuffle=True, random_state=1)
# enumerate splits
scores = list()
for train_ix, test_ix in kfold.split(X, y):
# fit and evaluate the model...
...
...
# summarize all scores
print('Mean Accuracy: %.3f (%.3f)' % (mean(scores), std(scores)))
We can use this framework to develop a reliable estimate of MLP model performance with our base configuration, and even with a range of different data preparations, model architectures, and learning configurations.
It is important that we first developed an understanding of the learning dynamics of the model on the dataset in the previous section before using k-fold cross-validation to estimate the performance. If we started to tune the model directly, we might get good results, but if not, we might have no idea of why, e.g. that the model was over or under fitting.
If we make large changes to the model again, it is a good idea to go back and confirm that the model is converging appropriately.
The complete example of this framework to evaluate the base MLP model from the previous section is listed below.
# k-fold cross-validation of base model for the mammography dataset
from numpy import mean
from numpy import std
from pandas import read_csv
from sklearn.model_selection import StratifiedKFold
from sklearn.preprocessing import LabelEncoder
from sklearn.metrics import accuracy_score
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense
from matplotlib import pyplot
# load the dataset
path = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/mammography.csv'
df = read_csv(path, header=None)
# split into input and output columns
X, y = df.values[:, :-1], df.values[:, -1]
# ensure all data are floating point values
X = X.astype('float32')
# encode strings to integer
y = LabelEncoder().fit_transform(y)
# prepare cross validation
kfold = StratifiedKFold(10, shuffle=True, random_state=1)
# enumerate splits
scores = list()
for train_ix, test_ix in kfold.split(X, y):
# split data
X_train, X_test, y_train, y_test = X[train_ix], X[test_ix], y[train_ix], y[test_ix]
# determine the number of input features
n_features = X.shape[1]
# define model
model = Sequential()
model.add(Dense(50, activation='relu', kernel_initializer='he_normal', input_shape=(n_features,)))
model.add(Dense(1, activation='sigmoid'))
# compile the model
model.compile(optimizer='adam', loss='binary_crossentropy')
# fit the model
model.fit(X_train, y_train, epochs=300, batch_size=32, verbose=0)
# predict test set (thresholding replaces the removed predict_classes)
yhat = (model.predict(X_test) > 0.5).astype('int32').flatten()
# evaluate predictions
score = accuracy_score(y_test, yhat)
print('>%.3f' % score)
scores.append(score)
# summarize all scores
print('Mean Accuracy: %.3f (%.3f)' % (mean(scores), std(scores)))
Running the example reports the model performance for each iteration of the evaluation procedure and reports the mean and standard deviation of classification accuracy at the end of the run.
Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.
In this case, we can see that the MLP model achieved a mean accuracy of about 98.7 percent, which is pretty close to our rough estimate in the previous section.
This confirms our expectation that the base model configuration works better than a naive model for this dataset.
Next, let’s look at how we might fit a final model and use it to make predictions.
Final Model and Make Predictions
Once we choose a model configuration, we can train a final model on all available data and use it to make predictions on new data.
In this case, we will use the base model configuration evaluated in the previous section as our final model.
We can prepare the data and fit the model as before, although on the entire dataset instead of a training subset of the dataset.
...
# split into input and output columns
X, y = df.values[:, :-1], df.values[:, -1]
# ensure all data are floating point values
X = X.astype('float32')
# encode strings to integer
le = LabelEncoder()
y = le.fit_transform(y)
# determine the number of input features
n_features = X.shape[1]
# define model
model = Sequential()
model.add(Dense(50, activation='relu', kernel_initializer='he_normal', input_shape=(n_features,)))
model.add(Dense(1, activation='sigmoid'))
# compile the model
model.compile(optimizer='adam', loss='binary_crossentropy')
We can then use this model to make predictions on new data.
First, we can define a row of new data.
...
# define a row of new data
row = [0.23001961,5.0725783,-0.27606055,0.83244412,-0.37786573,0.4803223]
Note: I took this row from the first row of the dataset and the expected label is a ‘-1’.
We can then make a prediction.
...
# make prediction (predict_classes was removed in newer versions of Keras;
# for a sigmoid output, threshold the predicted probability instead)
yhat = (model.predict([row]) > 0.5).astype('int32').flatten()
Then we invert the transform on the prediction, so we can interpret the result as the correct label (which is just an integer for this dataset).
...
# invert transform to get label for class
yhat = le.inverse_transform(yhat)
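As a standalone illustration of what LabelEncoder is doing (using a toy list of labels, not the mammography data), fit_transform maps each distinct label to an integer in sorted order, and inverse_transform maps it back:

```python
from sklearn.preprocessing import LabelEncoder

le = LabelEncoder()
# fit on example labels; the classes_ attribute stores them sorted: [-1, 1]
encoded = le.fit_transform([-1, 1, -1, -1])
print(encoded)                    # [0 1 0 0]
# invert an encoded prediction back to the original label
print(le.inverse_transform([0]))  # [-1]
```

This is exactly the inversion applied to the model's predicted class index above.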
And in this case, we will simply report the prediction.
Tying this all together, the complete example of fitting a final model for the mammography dataset and using it to make a prediction on new data is listed below.
# fit a final model and make predictions on new data for the mammography dataset
from pandas import read_csv
from sklearn.preprocessing import LabelEncoder
from sklearn.metrics import accuracy_score
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import Dropout
# load the dataset
path = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/mammography.csv'
df = read_csv(path, header=None)
# split into input and output columns
X, y = df.values[:, :-1], df.values[:, -1]
# ensure all data are floating point values
X = X.astype('float32')
# encode strings to integer
le = LabelEncoder()
y = le.fit_transform(y)
# determine the number of input features
n_features = X.shape[1]
# define model
model = Sequential()
model.add(Dense(50, activation='relu', kernel_initializer='he_normal', input_shape=(n_features,)))
model.add(Dense(1, activation='sigmoid'))
# compile the model
model.compile(optimizer='adam', loss='binary_crossentropy')
# fit the model
model.fit(X, y, epochs=300, batch_size=32, verbose=0)
# define a row of new data
row = [0.23001961,5.0725783,-0.27606055,0.83244412,-0.37786573,0.4803223]
# make prediction (predict_classes was removed in newer versions of Keras;
# for a sigmoid output, threshold the predicted probability instead)
yhat = (model.predict([row]) > 0.5).astype('int32').flatten()
# invert transform to get label for class
yhat = le.inverse_transform(yhat)
# report prediction
print('Predicted: %s' % (yhat[0]))
Running the example fits the model on the entire dataset and makes a prediction for a single row of new data.
Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.
In this case, we can see that the model predicted a “-1” label for the input row.
Predicted: '-1'
Further Reading
This section provides more resources on the topic if you are looking to go deeper.
Cybersecurity threats are always evolving, and today we’re seeing a new wave of advanced attacks targeting areas of computing that don’t have the protection of the cloud. New data shows that firmware attacks are on the rise, and businesses aren’t paying close enough attention to securing this critical layer.
Recently, Microsoft commissioned a study that showed how attacks against firmware are outpacing investments targeted at stopping them. The March 2021 Security Signals report showed that more than 80% of enterprises have experienced at least one firmware attack in the past two years, but only 29% of security budgets are allocated to protect firmware.
Security Signals is a comprehensive research report assembled from interviews with 1,000 enterprise security decision makers (SDMs) from various industries across the U.S., UK, Germany, China, and Japan. Microsoft commissioned Hypothesis Group, an insights, design, and strategy agency, to execute the research.
The study showed that current investment is going to security updates, vulnerability scanning, and advanced threat protection solutions. Yet despite this, many organizations are concerned about malware accessing their system as well as the difficulty in detecting threats, suggesting that firmware is more difficult to monitor and control. Firmware vulnerabilities are also exacerbated by a lack of awareness and a lack of automation.
But the tide may be starting to turn against firmware exploits. There is a growing awareness of the issue worldwide, a new willingness to invest in protections, and an emerging class of secured-core hardware is showing the potential to empower organizations with chip-level security and new automation and analytics capabilities.
Firmware provides fertile ground to plant malicious code
Firmware, which lives below the operating system, is emerging as a primary target because it is where sensitive information like credentials and encryption keys are stored in memory. Many devices in the market today don’t offer visibility into that layer to ensure that attackers haven’t compromised a device prior to the boot process or at runtime below the kernel. And attackers have noticed.
If that’s not enough, the National Institute of Standards and Technology’s (NIST) National Vulnerability Database (NVD) has shown more than a five-fold increase in attacks against firmware in the last four years, and attackers have used this time to further refine their techniques and get ahead of software-only protections.
Yet the Security Signals study shows that awareness of this threat is lagging across industries. Even with this onslaught of firmware attacks, the study shows that SDMs believe software is three times as likely to pose a security threat versus firmware.
“There are two types of companies – those who have experienced a firmware attack, and those who have experienced a firmware attack but don’t know it.” – Azim Shafqat, Partner at ISG and Former Managing VP at Gartner
The OS Kernel is an emerging gap in defense
A look at respondents’ investments bears out this disparity. Hardware-based security features such as Kernel data protection (KDP), or memory encryption, which block malware or malicious threat actors from corrupting the operating system’s kernel memory or from reading it at runtime, are a leading indicator of preparedness against sophisticated kernel-level attacks. Security Signals found that only 36% of businesses invest in hardware-based memory encryption and less than half (46%) are investing in hardware-based kernel protections.
Security Signals also found that security teams are too focused on outdated “protect and detect” models of security and are not spending enough time on strategic work — only 39% of security teams’ time is spent on prevention and they don’t see that changing in the next two years. The lack of proactive defense investment in kernel attack vectors is an example of this outdated model.
Physical attacks using hardware
In addition to firmware attacks, respondents identified concerns with attack vectors exposed by hardware. The recent ThunderSpy attack targeted Thunderbolt ports, leveraging direct memory access (DMA) functionality to compromise devices via hardware access to the Thunderbolt controller. Another flaw, this one unpatchable, was found in the T2 security chip used in many common consumer devices. Other major firmware attacks in the last year included the RobbinHood, Uburos, Derusbi, Sauron and GrayFish attacks that exploited driver vulnerabilities.
Lack of automation and investment leads to a gap in focus on firmware
Part of the disconnect may be due to security teams being stuck in reactive cycles and manual processes. The vast majority (82%) of Security Signals respondents reported that they don’t have the resources to allocate to more high-impact security work because they are spending too much time on lower-yield manual work like software patching, hardware upgrades, and mitigating internal and external vulnerabilities. A full 21% of SDMs admit that their firmware data goes unmonitored today.
Lack of automation is another factor causing organizations to lose time and detracting from building better prevention strategies. Seventy-one percent said their staff spends too much time on work that should be automated, and that number creeps up to 82% among the teams who said they don’t have enough time for strategic work. Overall, security teams are spending 41% of their time on firmware patches that could be automated.
Meanwhile, most SDMs (62%) believe more time should be spent on strategic work like setting the strategy and preparing for sophisticated threats like those targeted at firmware.
New investments are accelerating—and paying off
The challenge is global, and many organizations are realizing the importance of investing in these critical areas. Eighty-one percent of the German companies we surveyed were prepared and willing to invest, as compared to 95% of Chinese organizations and 91% of businesses in the U.S., UK, and Japan. Eighty-nine percent of regulated industry companies felt willing and able to invest in security solutions, although those in the financial services sector are not quite as ready to invest as companies in other markets.
Those that do make the right investments are seeing returns, and surveyed organizations that made a real investment in security saw a big payoff. Almost two-thirds (65%) of SDMs reported that investing in security increased efficiency throughout their organizations because it freed up SecOps teams to work on other projects, promoted business continuity, enabled end-user productivity, decreased downtime and saved on investments needed elsewhere.
Across all industry verticals, proven frameworks can lay the groundwork for a successful security strategy that includes automation, increases proactivity, and measures security progress.
“Firmware runs the hardware, but there isn’t a way to inspect to say you are 100% safe with firmware. Firmware attacks are less common (than software), but a successful attack will be largely disruptive.” – SANS Senior Instructor
Hardware security is paramount to protecting from future threats
With our partners, Microsoft has created a new class of devices, called Secured-core PCs, specifically designed to eliminate threats aimed at firmware. As announced at this year’s Microsoft Ignite conference, this protection was recently extended to servers and IoT devices. With Zero Trust built in from the ground up, SDMs will be able to invest more of their resources in strategies and technologies that prevent attacks in the future, rather than constantly defending against the onslaught of attacks aimed at them today.
The SDMs in the study who reported they have invested in secured-core PCs showed a higher level of satisfaction with their security and enhanced confidentiality, availability, and integrity of data as opposed to those not using them. Based on analysis from Microsoft threat intelligence data, secured-core PCs provide more than twice the protection from infection than non-secured-core PCs. Sixty percent of surveyed organizations who invested in secured-core PCs reported supply chain visibility and monitoring as a top concern. According to Accenture’s State of Cyber Resilience report, indirect attacks into the supply chain now account for 40% of security breaches.
Secured-core PCs provide powerhouse protection out of the box, with capabilities such as Virtualization-Based Security, Credential Guard, and Kernel DMA protection. The subsequent automation and out-of-the-box capabilities also free up time for SDMs to focus more of their efforts on high-value and strategic endeavors and less on low-level activities.
Security Signals also found that companies are investing in larger devices to protect against hardware security breaches: more than half are focusing on servers. Microsoft is planning ahead and innovating there as well. With our partners AMD and Intel, we announced the extension of secured-core to servers and edge devices at our virtual Spring Ignite.
To learn more about the more than 100 certified secured-core PCs available today from Microsoft, Acer, Dell, HP, Lenovo, Panasonic, and more, visit our Secured-core web page.
“Server investments are high today because they are used as stepping stones in the cloud migration journey.” – Azim Shafqat, Partner at ISG and Former Managing VP at Gartner
The most important takeaway from the Security Signals report is that companies want to have more proactive strategies in place for security, especially when it comes to addressing firmware attacks. Microsoft is working to address that need by partnering with leading PC manufacturers and silicon vendors to establish a proactive strategy towards device security.
Ultimately, those enterprises who align their resources to develop such preventive strategies will give themselves a better chance for business continuity, productivity, and protection from emerging threats.
Methodology
Security Signals research occurred from August to December 2020, when a 20-minute online survey was conducted with 1,000 decision makers involved in security and threat protection decisions at enterprise companies from a range of industries across the U.S., UK, Germany, China, and Japan.
The Security Signals report works to create a detailed picture of the current security landscape: to understand the unique mindset and priorities that security decision makers (SDMs) bring to their organizations; to shed light on the benefits and challenges of adopting security solutions; to assess what impacts and shapes SDMs’ business decisions; and to see what the future of security may hold. The goal of this paper is to provide up-to-date research on the state of security, across countries and industries, in order to better serve our customers and partners, and enable security decision makers to further their development of security strategies within their organizations.
GitForDelphi allows you to work with git repositories from within your Delphi code, with the only dependencies being the uGitForDelphi.pas source file and the libgit2 DLL.
GitForDelphi is extremely easy to use!
To set it up, just add uGitForDelphi to the uses section and call InitLibgit2; the libgit2 DLL will be loaded and its API will be ready to use for reading, creating, and editing git repositories.
As the article on GitHub says, GitForDelphi currently exposes the libgit2 C API exactly: all function exports from git2.dll have been converted, including the necessary structures. Some of the tests from libgit2 have been converted and are all passing.
I intend to make a wrapper class, TGitRepository, to give a nicer Delphi-like interface for working with repositories.
Pre-built libgit2 DLL: a git2.dll built with Visual C++ 2010 Express is available in the binary branch; you can use it while in the master branch like this:
git checkout origin/binary -- tests/git2.dll
git reset tests/git2.dll
See the LIBGIT2_sha file for the libgit2 commit that the DLL and code are currently based on.
Also said in the article is that permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
This is a great tool for working with git repositories from Delphi. If you want to check it out and download it, refer to the link below:
There are many data visualization libraries in Python, yet Matplotlib is the most popular library out of all of them. Matplotlib’s popularity is due to its reliability and utility - it's able to create both simple and complex plots with little code. You can also customize the plots in a variety of ways.
In this tutorial, we'll cover how to plot Stack Plots in Matplotlib.
Stack Plots are used to plot linear data, in a vertical order, stacking each linear plot on another. Typically, they're used to generate cumulative plots.
Importing Data
We'll be using a dataset on Covid-19 vaccinations, from Our World in Data, specifically, the dataset that contains the cumulative vaccinations per country.
We’ll begin by importing all the libraries that we need. We’ll import Pandas to read and parse the dataset, Numpy to generate values for the X-axis, and we’ll of course need to import the PyPlot module from Matplotlib:
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
Let's take a peek at the DataFrame we'll be using:
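As a sketch, here is a tiny hypothetical stand-in for that DataFrame; in practice you'd load the real Our World in Data CSV with `pd.read_csv()` (the file name and these rows are illustrative, not the actual data):

```python
import pandas as pd

# tiny stand-in for the vaccinations file: one row per country per day
# (in practice: df = pd.read_csv('covid_vaccinations.csv'))
df = pd.DataFrame({
    'Entity': ['Albania', 'Albania', 'Algeria'],
    'Date': ['2021-01-13', '2021-01-14', '2021-01-30'],
    'total_vaccinations': [0, 64, 30],
})
print(df.head())
```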
We're interested in the Entity and total_vaccinations. While we could use the Date feature as well, to gain a better grasp of how the vaccinations are going day-by-day, we'll treat the first entry as Day 0 and the last entry as Day N:
This dataset will require some pre-processing, since this is a specific use-case. Though, before pre-processing it, let's get acquainted with how Stack Plots are generally plotted.
Plot a Stack Plot in Matplotlib
Stack Plots are used to visualize multiple linear plots, stacked on top of each other. With a regular line plot, you'd plot the relationship between X and Y. Here, we're plotting multiple Y features on a shared X-axis, one on top of the other:
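As a minimal sketch with made-up values (not the vaccination data), a Stack Plot of three series on a shared x-axis looks like this:

```python
import matplotlib.pyplot as plt

x = [1, 2, 3, 4, 5]
y1 = [1, 2, 3, 4, 5]
y2 = [2, 2, 2, 2, 2]
y3 = [5, 4, 3, 2, 1]

fig, ax = plt.subplots()
# each y series is stacked on top of the previous one
ax.stackplot(x, y1, y2, y3)
plt.show()
```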
Since this type of plot can easily get you lost in the stacks, it's really helpful to add labels attached to the colors, by setting the keys() from the y_values dictionary as the labels argument, and adding a legend to the plot:
Note: The length of these lists has to be the same. You can't plot y1 with 3 values, and y2 with 5 values.
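Putting the labels and legend together, a minimal sketch with a hypothetical two-country dictionary (standing in for the pre-processed data) might look like:

```python
import matplotlib.pyplot as plt

# hypothetical dictionary shaped like the pre-processed dataset
y_values = {
    'Algeria': [0, 0, 30],
    'Andorra': [576, 1261, 1755],
}

fig, ax = plt.subplots()
# the dictionary keys double as the legend labels
ax.stackplot([0, 1, 2], list(y_values.values()), labels=y_values.keys())
ax.legend(loc='upper left')
plt.show()
```

Note that both value lists have the same length, as required.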
This brings us to our Covid-19 vaccination dataset. We'll pre-process the dataset to take the form of a dictionary like this, and plot the cumulative vaccines given to the general population.
Let's start off by grouping the dataset by Entity and total_vaccinations, since each Entity currently has numerous entries. Also, we'll want to drop the entities named World and European Union, since they're convenience entities, added for cases where you might want to plot just a single cumulative line.
In our case, it'll effectively more than double the total_vaccination count, since they include already plotted values of each country, as single entities:
This results in a completely different shape of the dataset - instead of each entry having their own Entity/total_vaccinations entry, each Entity will have a list of their total vaccinations through the days:
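The grouping step described above can be sketched like this; the small DataFrame is a hypothetical stand-in for the loaded dataset:

```python
import pandas as pd

# tiny stand-in for the loaded dataset
df = pd.DataFrame({
    'Entity': ['Algeria', 'Algeria', 'World'],
    'total_vaccinations': [0, 30, 30],
})

# drop the convenience aggregates so totals aren't double-counted
df = df[~df['Entity'].isin(['World', 'European Union'])]

# collect each country's daily totals into one list per country
cv_dict = df.groupby('Entity')['total_vaccinations'].apply(list).to_dict()
print(cv_dict)  # {'Algeria': [0, 30]}
```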
However, there's a problem here. We can't plot these entries if their shapes aren't the same. Algeria has 3 entries, while Andorra has 9, for example. To combat this, we'll want to find the key with the most values, and how many values there are.
Then, we construct a new dictionary (it is inadvisable to modify a dictionary while iterating over it) and insert 0s for each missing day in the past, since there were 0 total vaccinations on those days:
max_key, max_value = max(cv_dict.items(), key=lambda x: len(x[1]))
cv_dict_full = {}
for k, v in cv_dict.items():
    if len(v) < len(max_value):
        # pad shorter country histories with zeros for the missing earlier days
        leading_zeros = [0] * (len(max_value) - len(v))
        cv_dict_full[k] = leading_zeros + v
    else:
        cv_dict_full[k] = v
print(cv_dict_full)
Here, we simply check whether the length of the list in each entry is shorter than the length of the longest list. If it is, we build a list of that many zeros and prepend it to the original list of values.
Now, if we print this new dictionary, we'll see something along the lines of:
Since there's a lot of countries in the world, the legend will be fairly crammed, so we've put it into 4 columns to at least fit in the plot:
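The final plotting step might be sketched as follows, using a hypothetical two-country version of cv_dict_full (the real dictionary holds every country, which is why the legend needs 4 columns):

```python
import numpy as np
import matplotlib.pyplot as plt

# hypothetical stand-in for the padded dictionary from the previous step
cv_dict_full = {
    'Algeria': [0, 0, 30],
    'Andorra': [576, 1261, 1755],
}

# Day 0 .. Day N on the shared x-axis
days = np.arange(len(max(cv_dict_full.values(), key=len)))

fig, ax = plt.subplots(figsize=(10, 6))
ax.stackplot(days, list(cv_dict_full.values()), labels=cv_dict_full.keys())
# spread the many country labels over 4 columns so the legend fits the plot
ax.legend(loc='upper left', ncol=4)
plt.show()
```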
Conclusion
In this tutorial, we've gone over how to plot simple Stack Plots, as well as how to pre-process datasets and shape data to fit Stack Plots, using Python's Pandas and Matplotlib frameworks.
If you're interested in Data Visualization and don't know where to start, make sure to check out our bundle of books on Data Visualization in Python:
Data Visualization in Python with Matplotlib and Pandas is a book designed to take absolute beginners to Pandas and Matplotlib, with basic Python knowledge, and allow them to build a strong foundation for advanced work with these libraries - from simple plots to animated 3D plots with interactive buttons.
It serves as an in-depth guide that'll teach you everything you need to know about Pandas and Matplotlib, including how to construct plot types that aren't built into the library itself.
Data Visualization in Python, a book for beginner to intermediate Python developers, guides you through simple data manipulation with Pandas, covers core plotting libraries like Matplotlib and Seaborn, and shows you how to take advantage of declarative and experimental libraries like Altair. More specifically, over the span of 11 chapters this book covers 9 Python libraries: Pandas, Matplotlib, Seaborn, Bokeh, Altair, Plotly, GGPlot, GeoPandas, and VisPy.
It serves as a unique, practical guide to Data Visualization, in a plethora of tools you might use in your career.
ROM (Read-Only Memory) is one of the most important types of electronic storage, built into a device during manufacturing. You will find ROM chips in computers and many other electronic products; game consoles, VCRs, and even car radios all use ROM to carry out their functions effectively and smoothly.
ROM chips are either built into a device's hardware or supplied on a removable chip, as in flash drives and other auxiliary devices. Because ROM is non-volatile, its contents remain intact even without a power supply. In this post, we will learn more about ROM and the different types of ROM:
What is ROM?
ROM is solid-state memory whose stored data can only be read. Its defining feature is that once data is stored, it cannot be changed or deleted. As mentioned before, it is mainly used in computers and various electronic devices, and its data does not disappear even when the power is turned off. By contrast, the most widely used type of primary storage is RAM (random access memory), a volatile memory whose contents are lost when the power is turned off.
Though ROM is a type of non-volatile memory, it is not appropriate for use as primary storage because of some limitations. Generally, non-volatile memories are more expensive, offer lower performance, and have a limited lifetime compared to volatile RAM.
How Does ROM Work?
A defining characteristic of ROM is that its data is written once, at or shortly after manufacture, and is then only read during operation, rather than being rewritten quickly and conveniently like random access memory.
Whatever data is stored in a ROM is therefore stable: it does not change even after the power is off. The structure is simple and reading is convenient, so ROM is used for secondary and long-term storage of fixed data and programs.
The CPU can only read the data in ROM, and traditionally it has not been possible to modify it. However, some ROM chips have rewrite abilities, so data can be erased from certain kinds of ROM, although it cannot be rewritten or erased nearly as fast as with RAM.
How ROM Works During the Bootstrapping Process
Whenever you press the power button, the BIOS chip wakes up and checks the different components of the system to ensure they are present and working correctly. In this process, known as the power-on self-test, the BIOS instructs your CPU to check code at various locations. During the test, you may hear the whining of the hard drive and see flashing lights. After the test is done, the CPU takes over and launches the operating system.
Now, let’s go ahead and discuss different kinds of ROMs and their characteristics.
MROM (Masked Read Only Memory)
The first ROMs were hard-wired devices that contained a pre-programmed set of data and instructions. These types of ROMs are known as masked ROMs, and they are cheap to produce.
PROM (Programmable ROM)
PROM can be modified exactly once by the user. Users can buy a blank PROM and enter the desired contents using a PROM programmer. Inside a PROM chip are small fuses that are burnt open during programming. It can be programmed only once and cannot be erased. A blank PROM chip allows current to run over all possible pathways; the programmer selects a pathway for the current by sending a high voltage over the unwanted fuses to “burn” them out. Static electricity can create a similar effect by accident, so PROMs are more vulnerable to damage than conventional ROMs.
EPROM (Erasable & Programmable ROM)
EPROM can be erased by exposing it to ultra-violet light for around 40 minutes; an EPROM eraser typically performs this function. During programming, an electrical charge is trapped in an insulated gate region. This charge can be retained for over 10 years because it has no leakage path. To erase the chip, ultra-violet light is passed through a quartz crystal window, and this exposure dissipates the charge. During normal use, the quartz lid is sealed with a sticker. Erasure renders the chip blank again, after which you can reprogram it by the same process as a PROM. EPROM chips eventually wear out, but they often have lifetimes of more than 1,000 erasures.
EEPROM (Electrically Erasable & Programmable ROM)
EEPROM can be programmed and erased electrically. This type of ROM can be erased and reprogrammed around ten thousand times, with erasing and programming taking 4 to 10 milliseconds. In an EEPROM, any location can be selectively erased or programmed, and EEPROMs can be erased just 1 byte at a time rather than erasing the whole chip. This makes the reprogramming process flexible but slow.
FLASH ROM
Flash ROM is an advanced version of EEPROM that stores information in an array of memory cells made from floating-gate transistors. One primary benefit of this memory is that you can erase and write blocks of data of 512 bytes or more at one time, whereas in EEPROM you can erase or write just 1 byte at a time; this makes flash much faster than EEPROM.
This memory can be reprogrammed without removing it from your computer. Its access time is in the range of 45 to 90 nanoseconds. It is also highly durable, since it can withstand high temperatures and intense pressure.
Some Examples of ROM
Here are a few real-life examples of ROM:
ROM is used in electronic devices such as feature phones (for example, the Nokia 3310), handheld games, DVD players, VCRs, digital watches, and more.
Because it stores data permanently, ROM is used in many kinds of embedded systems, where the data does not need to change.
It is also used in automobiles, where required data is saved on a chip.
ROM is also used in various home appliances such as microwaves, TVs, washing machines, and refrigerators.
ROM is used in automated toys such as the singing fish toy, which stores a pre-planned program and plays music when you push its buttons.
It is also used in various other devices such as printers, calculators, fax machines, and plotters.
Share your thoughts about the article in the comment section below.
Developers spend much of their time debugging their code: tracing errors and watching how variables’ values change over time.
When it comes to Delphi, we have several professional debugging tools that help access Delphi debug information and find problems easily.
One of the finest Delphi debugging frameworks is the DebugEngine.
What is DebugEngine?
DebugEngine is a collection of utilities related to debugging stuff (stack trace, CPU registers snapshot, debug info).
The security community is continuously changing, growing, and learning from each other to better position the world against cyber threats. In the latest Voice of the Community blog series post, Microsoft Product Marketing Manager Natalia Godyla talks with Tanya Janca, Founder of We Hack Purple Academy and author of the best-selling book “Alice and Bob Learn Application Security.” Previously, Tanya shared her perspectives on the role of application security (AppSec) and the challenges facing AppSec professionals. In this blog, Tanya shares how to build an AppSec program, find security champions, and measure its success.
Natalia: When you’re building an AppSec program, what are the objectives and requirements?
Tanya: This is sort of a trick question because the way I do it is based on what’s already there and what they want to achieve. For Canada, I did antiterrorism activities, and you better believe that was the strictest security program that any human has ever seen. If I’m working with a company that sells scented soap on the internet, the level of security that they require is very different, their budget is different, and the importance of what they’re protecting is different. I try to figure out what the company’s risks are and what their tolerance is for change. For instance, I’ve been called into a lot of banks and they want the security to be tight, but they’re change-averse. I find out what matters to them and try to bring their eyes to what should matter to them.
I also usually ask for all scan results. Even if they have almost no AppSec program, usually people have been doing scanning or they’ve had a penetration test. I look at all of it and I look at the top three things and I say, “OK, let’s just obliterate those top three things,” because quite often the top two or three are 40 to 60 percent of their vulnerabilities. First, I stop all the bleeding, and then I create processes and security awareness for developers. We’re going to have a secure coding day and deep dive into each one of these things. I’m going to spend quality time with the people who review all the pull requests so they can look for the top three and start setting specific, measurable goals.
It’s really important to get the developers to help you. When you have a secure coding training, a bunch of developers will self-identify as the security developer. There will be one person who asks multiple questions. We’re going to get that person’s email. They’re our new friend. We’re going to buy that person some books and encourage open communication because that person is going to be our security champion. Eventually, many of my clients start security champion programs and that’s even better because then you have a team of developers—hopefully one per team—that are helping you bring things to their team’s attention.
Natalia: What are some of the key performance indicators (KPIs) for measuring security posture?
Tanya: As application security professionals, we want to minimize the risk of scary apps and then try to bring everything across the board up to a higher security posture. Each organization sets that differently. For an application security program, I would measure that every app receives security attention in every phase of the software development life cycle. For a program, I take inventory of all their apps and APIs. Inventories are a difficult problem in application security; it’s the toughest problem that our field has not solved.
Once you have an inventory, you want to figure out if you can do a quick dynamic application security testing (DAST) scan on everything. You will see it light up like a Christmas tree on some, and on others, it found a couple of lows. It’s not perfect, but it’s what you can do in 30 days. You can scan a whole bunch of things quickly and see OK, so these things are terrifying, these things look OK. Now, let’s concentrate on the terrifying things and make them a little less scary.
Natalia: Do you have any best practices for threat modeling cloud security?
Tanya: For threat modeling generally, I introduce it as a hangout session with a security person and try not to be too formal the first time, because developers usually think, “What is she doing here? Danger, Will Robinson, danger. The security person wants to spend time with us. What have we done wrong?” I say, “I wanted to talk about your app and see if there’s any helpful advice I can offer.” Then, I start asking questions like, “If you were going to hack your app, how would you do it?”
I like the STRIDE methodology, where each of the letters represents a different thing that you need to worry about happening to your apps. Specifically, spoofing, tampering, repudiation, information disclosure, denial of service (DOS), and elevation of privilege. Could someone pretend to be someone else? Could someone pretend to be you? I go through it slowly in a conversational manner because that app is their baby, and I don’t want them to feel like I’m attacking their baby. Eventually, I teach them STRIDE so they can think about these things. Then, we come up with a plan and I say, “OK, I’m going to write up these notes and email them to you.” Writing the notes means you can assign tasks to people.
With threat modeling in the cloud, you must ask more questions, especially if your organization has had previous problems. You want to ask about those because there will be patterns. The biggest issue with the cloud is that we didn’t give them enough education. When we’re bringing them to the cloud, we need to teach them what we expect from them, and then we’ll get it. If we don’t, there’s a high likelihood we won’t get it.
Natalia: How can security professionals convince decision-makers to invest in AppSec?
Tanya: I have a bunch of tricks. The first one is to give presentations on AppSec. I would do lunch and learns. For instance, I sent out an email once to developers: “I’m going to break into a bank at lunch. Who wants to come watch?” and then I showed them this demo of a fake bank. I explained what SQL injection was and I explained how I’d found that vulnerability in one of our apps and what could happen if we didn’t fix it. And they said, “Woah!” Or I’d ask, “Who wants to learn how to hack apps?” and then I showed them a DAST tool. I kept showing them stuff and they started becoming more interested.
Then, I had to interest the developer managers and upper management. Some were still not on board because this was their first AppSec program and my first AppSec program. No one would do what I said, and I had all these penetration test results from a third party, and we had hired four different security assessors and they’d reported big issues that needed to be addressed.
So, I came up with a document called the risk sign-off sheet, which listed all the security risks and exactly what could happen to the business. I was extremely specific about what worried me. I printed it and I had a sign-off for the Director of Security for the whole building and the Chief Information Officer of the entire organization. I went to them and said, “I need your signature that you accept this risk on behalf of your organization.” I put a little note on the risk sign-off sheet that read: Please sign.
The Director of Security called and said, “What is this, Tanya?” and I told him, “No one will fix these things and I don’t have the authority to accept this risk on behalf of the organization. Only you do. I don’t have the authority to make these people fix these things. Only you do. I need you to sign to prove that you were aware of the risks. When we’re in the news, I need to know who’s at fault.” Both the CIO and the Director of Security refused to sign, and I said, “Then you have to give me the authority. I can’t have the responsibility and not have the authority” and it worked. I’ve used it twice at work and it worked.
It’s also important to explain to them using words they understand. The Head of Security, who is in charge of physical security and IT security, was a brilliant man but he didn’t know AppSec. When I explained that because of this vulnerability you can do this with the app, and this is what can result for our customers, he said, “Oh, let’s do something.” I had to learn how to communicate a lot better to do well at AppSec because as a developer, I would just speak developer to other developers.
We have great new C++ Builder post picks from the LearnCPlusPlus.org website on the basics of connectivity to different database systems. Do you want to learn how to connect to MySQL Server in C++ Builder? Want to use the MyDAC component to connect to a MySQL database in C++ Builder? Do you want to connect to PostgreSQL in C++ Builder? Do you want to learn how to use the FireDAC component to connect to the many supported databases in RAD Studio? Do you want to learn how to set up and create a simple database in InterBase? Do you know how to connect to an InterBase database by using FireDAC components?
The FireDAC component pack is one of the great components for database connections that officially comes with RAD Studio, C++ Builder, and Delphi. FireDAC is a Universal Data Access library for developing applications for multiple devices, connected to enterprise databases. With its powerful universal architecture, FireDAC enables native high-speed direct access from Delphi and C++Builder to InterBase, SQLite, MySQL, SQL Server, Oracle, PostgreSQL, DB2, SQL Anywhere, Advantage DB, Firebird, Access, Informix, DataSnap and more, including the NoSQL database MongoDB.
Devart is a great company that supports the latest C++ Builder and Delphi. Its MySQL Data Access Components (MyDAC) is a library of components that provides direct access to MySQL and MariaDB (including the Community Edition) from C++ Builder and Delphi, as well as from Lazarus (and Free Pascal), on Windows, Linux, macOS, iOS, and Android, for both 32-bit and 64-bit platforms.
MySQL is one of the world's most popular open-source databases, and it is easy and practical to use for small- to large-scale databases.
PostgreSQL is another popular database that is a powerful, open source object-relational database system with over 30 years of active development that has earned it a strong reputation for reliability, feature robustness, and performance.
InterBase is a powerful database developed and supported by Idera/Embarcadero. It is a zero-administration, small-footprint database engine that can power your server and even run on your mobile devices as an embedded database. The InterBase 2020 release adds several new features, including tablespaces support, allowing for better performance on servers with multiple data-storage options. InterBase 2020 is an ultrafast, scalable, embeddable SQL database with commercial-grade data security, disaster recovery, and change synchronization abilities.
If you are a beginner or want to jump into C++ Builder, please visit our LearnCPlusPlus.org website for great posts ranging from the basics to professional examples, full code, snippets, and more.
We have explained how to connect to these databases using these components, and we will continue to add more database connectivity options. We have also covered how to check the compiled OS platform and whether an application is compiled as 32-bit or 64-bit. Here are our picks for you!
Nuxt.js is what comes to mind when we think about server-side rendering (SSR) in Vue. Besides that, it is also a convention-over-configuration style framework which is very extensible. Known for developer experience, it does a wonderful job at abstraction, but as time goes on, we all have reasons and requirements for diving deeper.
For the second time in recent months, I had the requirement to block a page from loading based on a client-side API request.
This may not seem very interesting to a regular Vue developer, who would probably just add a navigation guard to the route in the Vue Router. But in the world of SSR, this is a catch-22 scenario.
A server-side render means preparing the page on the server: not just the template markup, but also executing stages of its lifecycle. All of this happens before the app arrives client-side, at which point some client-side process would need to show a mask and loading spinner before either allowing the page or redirecting elsewhere. It doesn't quite fit well together.
Tools of the trade
Before getting to the nuts and bolts, it's best to review the tools Nuxt provides that could help.
Middlewares are the Nuxt construct which acts as a navigation guard, just like the Vue Router. Nuxt controls the router as part of its abstraction from the intricacies of a universal application (runs on the client or server).
Plugins allow functionality to be executed with the Nuxt context at application bootstrapping. This may be useful since an SSR only occurs at application bootstrapping; we'll cover some flows below.
Modules can hook into the project build and also many Nuxt hooks. But for our purposes, they don’t offer anything we need that can’t be achieved another way.
Does serverMiddleware provide anything for our use case? This extension point is very useful, but it applies functionality to the web server that Nuxt uses; it does not provide any Nuxt context and lives outside of, and alongside, the Nuxt server-side rendering. Short answer: no.
Middleware is the obvious choice, right?
When it runs on the client-side, it runs as part of a navigation guard in the Vue Router, just like a normal Vue solution would. The question one might ask is, can I set the middleware to run only on the client-side?
Short answer: no. Middlewares are by definition intended to run prior to rendering the page; if one were set to run only on the client-side, it would run later and would no longer be a middleware.
In universal mode, middlewares will be called once on server-side (on the first request to the Nuxt app, e.g. when directly accessing the app or refreshing the page) and on the client-side when navigating to further routes. With ssr: false, middlewares will be called on the client-side in both situations.
Reading this for the first time might make you think there is some middleware-specific ssr config setting that could make it run client-only, but that is not the case: middlewares run exclusively on the client-side only when SSR is completely disabled. Not the solution we want.
Plugins will save the day for sure!
So plugins run on both the server and client side by default, but they can be configured to run on only one. They run at application bootstrap only, so they won't catch a client-side navigation to a page; they can't be the whole solution, but maybe they are part of it?
Though not explicitly called out in the docs, plugins can be set up to be async and blocking. Here's a test plugin:
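A minimal sketch of such a plugin (the name and the artificial delay are illustrative; Nuxt awaits the returned promise before continuing bootstrap):

```javascript
// plugins/test.js: a hypothetical async, blocking plugin.
// Nuxt awaits the returned promise before finishing bootstrap.
const testPlugin = async (context) => {
  console.log(`plugin start: ${context.route.path}`);
  // Simulate slow async work (e.g. an API call) with a short delay.
  await new Promise((resolve) => setTimeout(resolve, 500));
  console.log('plugin end');
};

export default testPlugin;
```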
Note that the above code could also serve as a middleware; to see this behaviour in action see this Nuxt CodeSandbox. Watch the console and navigate between the index and about pages.
And the spanner in the works? Well, there are a few, but the biggest one is that although the client-side plugin is technically blocking, it only blocks the client-side bootstrapping; the server-side work is already done. This is very clear if you take the sandbox above and set the wait time to, say, 10 seconds: during those 10 seconds, the server-side rendered markup is staring you in the face! Oops, not quite so blocking, which might explain why the docs don't call out the async/await block. The block is still valuable to ensure load order on the client-side, but it doesn't block the server-rendered content, so it won't work for us.
What’s left?
The catch-22 can't be solved by trying to make a square peg fit in a round hole. Going back to our requirement, we need a navigation guard that only runs on the client, and so, if we hit the page directly, we have no choice but to defer the SSR and rely on a client-side render. Doing this means we avoid extra processing on the server, which at best is a drain on resources and at worst is incorrect and causes defects. But we don't want SSR completely disabled; there are routes where this guard won't run, and they should be rendered server-side.
So what does the solution look like? A middleware is needed, but if it executes on the server (process.server), we set some global state to track the work deferred to the client-side; otherwise we execute directly. We then use the default layout as a central control point in the rendering process.
export const actions = {
  async run({ state, dispatch, commit }) {
    if (!process.client) {
      throw new Error("deferred queue should never be triggered from a SSR");
    }
    if (!state.running && state.jobs.length) {
      commit("start");
      const runJob = (job) => dispatch(job, null, { root: true });
      try {
        await Promise.allSettled(state.jobs.map(runJob));
      } finally {
        commit("stop");
      }
    }
  },
  // Receive a job to execute client side only, it will execute
  // immediately if running client-side, otherwise it is queued
  async clientOnly({ commit, dispatch }, job) {
    if (process.server) {
      commit("defer", job);
    } else {
      await dispatch(job, null, { root: true });
    }
  }
};
The fully working solution can be seen in this Nuxt CodeSandbox. The key files to read are:
/layouts/default.vue
/middleware/test.js
/pages/about.vue
/store/deferred.js
/store/test.js
The deferred module is the generic mechanism for deferring a middleware, it works because the work to be performed is inside a store action (see test module) and the queue is simply a list of strings, each one a store action which is run without a payload.
In the sandbox, refreshing the About page will show that it is created on the client, not the server. Using the layout's <Nuxt/> as the cutoff between server and client-side is pretty clean cut, but it isn't completely black and white. Due to the lifecycle order, asyncData will actually run before the component is created (it reads from the route data), and therefore it will still run on the server.
As mentioned at the start of the article, this implementation was based on needing the functionality in two places. There could be lots of variations on this solution; the store state might be overkill if you have only a single use. But the premise and flow should remain much the same.
Hope you find the code helpful or the scenario interesting, and a final piece of advice, if in doubt, read the built Nuxt code.
We oftentimes find ourselves counting the number of days from and to a date. Be it calculating when someone's due to return a book, when a subscription should be renewed, how many days have passed since a notification or when a new event is coming up.
In this tutorial, we'll take a look at how to get the number of days between dates in JavaScript.
The Date Object in JavaScript
A JavaScript Date is the number of ticks (or milliseconds) that have elapsed since the beginning of the UNIX epoch (midnight on January 1, 1970, UTC).
Even though, at heart, a Date object is defined in terms of UTC, all of its methods fetch times and dates in the local time zone:
Date(); // Constructor for a new Date object
Date.now(); // Number of milliseconds elapsed since January 1, 1970 00:00:00 UTC
Now that we are familiar with the syntax, let's look at how to get the number of days between two dates using the Date object in JavaScript.
Number of Days Between Dates
To get the number of days between two dates, we'll make a simple function getNumberOfDays(), which accepts two Date objects:
function getNumberOfDays(start, end) {
const date1 = new Date(start);
const date2 = new Date(end);
// One day in milliseconds
const oneDay = 1000 * 60 * 60 * 24;
// Calculating the time difference between two dates
const diffInTime = date2.getTime() - date1.getTime();
// Calculating the no. of days between two dates
const diffInDays = Math.round(diffInTime / oneDay);
return diffInDays;
}
console.log(getNumberOfDays("2/1/2021", "3/1/2021"));
This code results in:
28
The function accepts two Strings, which represent dates. We first create Date objects from these strings, after which we calculate the number of milliseconds in a day. The getTime() function returns the number of milliseconds between the start of the Unix epoch and the time the Date represents. So, if we subtract the start date from the end date, we get the number of milliseconds between them.
We can then turn this number of milliseconds into days by dividing it by the number of milliseconds in a day, resulting in the number of days between the two dates.
Note: This approach includes the start date, but excludes the end date.
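The arithmetic above runs on local-time values, so a daylight-saving change inside the range makes the raw difference slightly more or less than a whole number of days, which Math.round then absorbs. A variant that compares both dates at UTC midnight sidesteps this entirely (a sketch; getNumberOfDaysUTC is an illustrative name, not part of the original tutorial):

```javascript
// DST-safe variant: anchor both dates to UTC midnight, so the
// difference is always an exact multiple of 24 hours.
function getNumberOfDaysUTC(start, end) {
  const date1 = new Date(start);
  const date2 = new Date(end);
  const utc1 = Date.UTC(date1.getFullYear(), date1.getMonth(), date1.getDate());
  const utc2 = Date.UTC(date2.getFullYear(), date2.getMonth(), date2.getDate());
  return (utc2 - utc1) / (1000 * 60 * 60 * 24);
}

console.log(getNumberOfDaysUTC("2/1/2021", "3/1/2021")); // 28
```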
Get Number of Days Between Dates in JavaScript with js-joda
Developers familiar with Java will likely be familiar with the widely-used Joda-Time library, which was extremely popular before the Java 8 revamp of the Date/Time API.
Joda-Time's influence inspired the creation of js-joda - a general purpose date/time library for JavaScript, based on the ISO calendar system.
An added benefit is that it's extremely lightweight and really fast, and compared to other libraries such as Moment.js or date-utils, it provides its own implementation of date/time objects, instead of relying on the Date class from the native JavaScript implementation.
Let's import the library through vanilla JavaScript:
Now, we can use the js-joda API. Let's rewrite the previous function to use the LocalDate class, courtesy of js-joda:
const JSJoda = require('js-joda');
const LocalDate = JSJoda.LocalDate;
function getNumberOfDays(start, end) {
const start_date = LocalDate.parse(start); // parse is a static factory, no `new` needed
const end_date = LocalDate.parse(end);
return JSJoda.ChronoUnit.DAYS.between(start_date, end_date);
}
console.log(getNumberOfDays("2021-02-01", "2021-03-01"));
This also results in:
28
Note: This approach is also exclusive of the end_date.
Conclusion
In this tutorial, we've taken a look at how to get the number of days between dates in JavaScript. Other than the built-in approach, relying on the Date class, we've also explored the js-joda library, which was inspired by the Java-driven Joda-Time library, for a much more succinct approach to solving this problem.
For a side project I am currently working on I needed a simple random image generator. I was recommended to take a look at unsplash.com/random, and it was exactly what I wanted!
But it didn't work exactly how I expected.
It opens images in the browser perfectly fine, but the Unsplash random image API returns a JSON object with lots of extra information about a resource, instead of a binary blob.
The example below is not the complete JSON object from a response. It is a part that includes URLs to different sizes of the image.
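That part of the payload looks roughly like this (the field names follow the Unsplash API's `urls` object; the values are illustrative placeholders):

```javascript
// Abridged Unsplash random-photo response: the `urls` object offers
// the same image at several sizes. The URL values are placeholders.
const photo = {
  urls: {
    raw: 'https://images.unsplash.com/photo-12345',
    full: 'https://images.unsplash.com/photo-12345?q=85&fm=jpg',
    regular: 'https://images.unsplash.com/photo-12345?w=1080',
    small: 'https://images.unsplash.com/photo-12345?w=400',
    thumb: 'https://images.unsplash.com/photo-12345?w=200',
  },
};

console.log(Object.keys(photo.urls)); // [ 'raw', 'full', 'regular', 'small', 'thumb' ]
```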
I can request one of these images and then format it to a binary response. So I decided to create a Lambda REST API Serverless application that fetches a resource from Unsplash and returns a binary representation of the image instead of a JSON object.
Here you can learn from my experience of configuring a Lambda function that returns a jpeg image accessible via a static URL provided by Amazon API Gateway.
What is AWS Lambda?
AWS Lambda is a service that lets you run code without managing your own server. It uses the Lambda standard runtime environment and works with the resources that Lambda provides. You also get features such as deployment, maintenance, automatic scaling, code monitoring, and logging out of the box. All you need to do is supply your code in one of the languages that Lambda supports, and Lambda will run it according to a schedule or in response to events; e.g., it can run your code in response to HTTP requests using API Gateway.
Create a Lambda function
I assume that you already have Node installed and have created Unsplash and AWS accounts, which you will need to proceed with this tutorial. The AWS Free Tier is a great way to learn and experiment with AWS functionality.
Note that during registration you will have to provide your payment card details, but you can set a budget limit in the settings, so you won't accidentally start getting bills for overuse of resources.
To get started with Lambda, use the Lambda console to create a new function. Click the `Create function` button that will bring you to a basic setup page where you can specify a function name and choose the language to use to write your function.
I selected the `Author from scratch` option and `Node.js 14.x` as my desirable runtime.
I also gave the `GetRandomImage` name to my function.
Lambda automatically creates default code for the function, which allows you to check out the expected format of the handler.
You can write Lambda functions directly in the AWS code editor, but if your function depends on external modules or is going to grow significantly in size, you should precompile the code together with its dependencies on your local machine and upload it as a .zip file to AWS Lambda.
Create a project on localhost
Run npm init and create a file named index.js in your project root directory. The source code of the app is pretty simple and small, so I'll leave it without explanation here. Feel free to reach out in the comments if you have any questions.
The only thing I want to draw your attention to is a part where I prepare the response in a specific format compatible with AWS API Gateway in order to serve images.
I use Axios to download the binary file from the URL, then I convert it to base64 string and tell my Lambda function to return statusCode 200, base64 body, and isBase64Encoded set to true.
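The essential response shape can be sketched like this (`buildImageResponse` is a hypothetical helper for illustration, not the tutorial's actual code):

```javascript
// Shape of the Lambda proxy response that lets API Gateway serve
// binary data. `buildImageResponse` is an illustrative helper.
function buildImageResponse(imageBuffer) {
  return {
    statusCode: 200,
    headers: { 'Content-Type': 'image/jpeg' },
    body: imageBuffer.toString('base64'), // the binary image, base64-encoded
    isBase64Encoded: true, // tells API Gateway the body is base64
  };
}

// Three stand-in bytes (the JPEG magic number) instead of a real image.
const response = buildImageResponse(Buffer.from([0xff, 0xd8, 0xff]));
console.log(response.body); // '/9j/'
```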
To run it locally, create the .env file in the project root directory and set UNSPLASH_ACCESS_KEY, which you can find in your Unsplash user settings. Without the access key, you will get the `Request failed with status code 401` error, telling you that the user is unauthorized to request the resource. Then install the project dependencies and call the handler:
If your result looks like the gist below, then you are ready to deploy your function to AWS.
Deploy to AWS
We are going to proceed with the `Upload from .zip file` option since our project has npm dependencies. Let’s create a .zip file and upload it to AWS.
Note that your local version of Node should match the selected Lambda runtime, because you upload precompiled dependencies.
It deploys your function to AWS and loads code to the virtual editor where you can modify it and deploy changes with a single button click.
Note that if you need additional npm dependencies, you will have to install them locally, create a new .zip file, and upload it again in order to add these precompiled dependencies to AWS.
If you change the function a lot and your .zip file grows bigger than 10 MB, use Amazon S3 to store the file and supply the file's URL via the `Amazon S3 location` option.
Note that the total unzipped size of the function and all layers can't exceed the unzipped deployment package size limit of 250 MB.
Don't forget to add environment variables to the Lambda configuration section. The production setup does not include the .env file where you locally specified the Unsplash access key.
Note that all access keys, IDs, and custom URLs from this tutorial are mocks, and they won't work for you. You should use your own values valid for your Unsplash and AWS accounts.
Change the amount of time that Lambda allows a function to run before stopping it. The default is 3 seconds, make it 20 seconds to give the function enough time to download the image and format the response.
Manually invoke your Lambda function using the sample event data provided in the console. Since our function does not rely on any input parameters, these values won’t affect the function execution.
The green box tells you that the execution succeeded and if you expand the box you will see the exact response we had on localhost.
Last but not least…
Amazon API Gateway configuration
You can create a REST API endpoint for your Lambda function by using Amazon API Gateway, which provides tools for creating and documenting web APIs and can route requests to Lambda functions. The Lambda runtime serializes the response object into JSON and sends it to the API. The API parses the response, uses it to create an HTTP response, and sends it to the client that made the original request.
I have to admit that without previous experience it was pretty tricky to work out the right combination of Lambda function response format and API Gateway settings needed to return a valid jpeg response from a URL.
You can refer to the official documentation because it provides a more detailed explanation of configuration options. I am going to repeat some parts from it here, but I also want to include more pictures and show you a setup that worked for me.
Click `Create API` and choose to build a `REST API` project to get control over the requests and responses of your Lambda project.
If this is your first time using API Gateway, you will see a page that introduces you to the features of the service. As part of the educational process, you will create a demo `Pet Store` endpoint using a sample REST API.
When you are done with the demo project you will be able to create an empty API as follows:
Choose API name, e.g. `LambdaSimpleProxy` and leave `Endpoint type` set to `Regional`. After creating a new API, click on the root resource (/) in the `Resources` tree and from the `Actions` dropdown menu select `Create Resource` item.
Give the name to your resource, for example `photos-random`. Leave `Configure as proxy resource` and `Enable API Gateway CORS` unchecked.
To set up the GET method, on the resources list, click `/photos-random` and from the `Actions` menu, choose `Create method`. Choose GET from the dropdown menu, and click the checkmark icon.
Leave the `Integration type` set to `Lambda Function` and choose `Use Lambda Proxy integration`. In the `Lambda Function` field, type any character and choose `GetRandomImage` from the dropdown menu. Leave `Use Default Timeout` checked and save the method.
After changes are saved click on the GET method and this will open the method execution scheme.
Click on the `Method Response` title and it will open the page with information about the method’s response types, their headers, and content types. Set `Content-Type` as `image/jpeg` in the Response Header for HTTP Status 200.
If you now deploy your REST API you will get the following response in the browser:
We send the image as a base64 string, but API Gateway should be able to convert it from base64 to binary.
Go to API settings and configure binary support for your API by specifying which media types should be treated as binary types. API Gateway will look at the Content-Type and Accept HTTP headers to decide how to handle the body. Use */* to enable all media types.
Open CloudShell, a browser-based shell with AWS CLI access, from the AWS Management Console. You can run the same commands from your local terminal, but for this you will need the AWS CLI installed and configured.
To return a binary blob instead of a base64-encoded payload from the endpoint, we should set the `contentHandling` property of the `IntegrationResponse` resource to `CONVERT_TO_BINARY`. To do this, submit a PATCH request, as follows:
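With the AWS CLI, that PATCH can be issued via `update-integration-response`; a sketch with placeholder IDs (substitute your own `rest-api-id` and `resource-id`):

```shell
# Placeholder IDs: replace abc123 and xyz456 with your API's values.
aws apigateway update-integration-response \
    --rest-api-id abc123 \
    --resource-id xyz456 \
    --http-method GET \
    --status-code 200 \
    --patch-operations op='replace',path='/contentHandling',value='CONVERT_TO_BINARY'
```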
You can find your `rest-api-id` and `resource-id` values at the resource header.
Don’t forget to deploy your REST API.
After completion, you will see a stage editor page with the invoke URL (you can also find this URL on Dashboard).
The invoke URL together with the resource name is the final URL that serves the image as a binary file.
Summary
To summarise, here are all the stages you have to complete in order to serve images with Lambda:
Create a new Lambda function.
Write the Lambda handler in the virtual editor or on your local machine and deploy code to AWS.
Add environment variables to the Lambda configuration settings.
Increase the amount of time that Lambda allows the function to run before stopping it, from the default 3 seconds to 20 seconds.
Create Amazon REST API Gateway.
Add a new resource and the GET method to API.
Bind the method to the Lambda function.
Change the method response to `image/jpeg`.
Add */* binary media types to your API settings.
Use AWS Shell to set the `contentHandling` property of the `IntegrationResponse` resource to `CONVERT_TO_BINARY`.
Deploy REST API.
The Lambda function from this tutorial lives on Git and here are some resources that were useful for me:
Node 14 became the LTS version, while Node 15 became the Current version in October 2020! As an odd-numbered release line, Node.js 15 will not be promoted to LTS.
In this article below, you'll find changelogs and download / update information regarding Node.js!
Node.js v15 arrived and became the Current version!
Some features delivered in Node.js 15:
AbortController: AbortController is a global utility class used to signal cancelation in selected Promise-based APIs, based on the AbortController Web API.
N-API Version 7: N-API 7 brings additional methods for working with ArrayBuffers.
npm 7: npm 7 comes with many new features like npm workspaces and a new package-lock.json format. npm 7 includes yarn.lock file support. Peer dependencies are now installed by default.
Throw on unhandled rejections: As of Node.js 15, the default mode for unhandledRejection is changed to throw (from warn). In throw mode, if an unhandledRejection hook is not set, the unhandledRejection is raised as an uncaught exception. Users that have an unhandledRejection hook should see no change in behavior, and it’s still possible to switch modes using the --unhandled-rejections=mode process flag.
QUIC (experimental): QUIC is a UDP-based, underlying transport protocol for HTTP/3. QUIC features inbuilt security with TLS 1.3, flow control, error correction, connection migration, and multiplexing. QUIC can be enabled by compiling Node.js with the --experimental-quic configuration flag. The Node.js QUIC implementation is exposed by the core net module.
V8 8.6: The V8 JavaScript engine has been updated to V8 8.6 (V8 8.4 is the latest available in Node.js 14). Along with performance tweaks and improvements the V8 update also brings the following language features:
Promise.any() (from V8 8.5)
AggregateError (from V8 8.5)
String.prototype.replaceAll() (from V8 8.5)
Logical assignment operators &&=, ||=, and ??= (from V8 8.5)
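These language features can be exercised with a few lines (runnable on Node.js 15+; names are illustrative):

```javascript
// String.prototype.replaceAll (from V8 8.5)
const greeting = 'a-b-c'.replaceAll('-', '.'); // 'a.b.c'

// Logical assignment operators (from V8 8.5)
let retries; // undefined
retries ??= 3;       // assigns because retries is nullish
let verbose = false;
verbose ||= true;    // assigns because verbose is falsy

// Promise.any resolves with the first fulfilled promise; if every
// promise rejects, it rejects with an AggregateError (both from V8 8.5).
Promise.any([Promise.reject(new Error('nope')), Promise.resolve('first win')])
  .then((value) => console.log(greeting, retries, verbose, value));
// → a.b.c 3 true first win
```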
add optional callback to crypto.sign and crypto.verify
support JWK objects in create*Key
deps:
switch openssl to quictls/openssl
update to cjs-module-lexer@1.1.0
fs:
improve fsPromises writeFile performance
improve fsPromises readFile performance
lib: implement AbortSignal.abort()
node-api: define version 8
worker: add setEnvironmentData/getEnvironmentData
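Of the changes above, AbortSignal.abort() is the easiest to show: it returns a signal that is already aborted, which is handy for exercising cancellation paths. A minimal sketch:

```javascript
// AbortSignal.abort() returns an already-aborted signal.
const signal = AbortSignal.abort();
console.log(signal.aborted); // true
```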
Changelog for Node v15.11.0 (Current)
crypto: make FIPS related options always available
errors: remove experimental from --enable-source-maps
Changelog for Node v15.10.0 (Current)
This is a security release. Vulnerabilities fixed:
CVE-2021-22883: HTTP2 'unknownProtocol' causes Denial of Service by resource exhaustion: Affected Node.js versions are vulnerable to denial-of-service attacks when too many connection attempts with an 'unknownProtocol' are established. This leads to a leak of file descriptors. If a file descriptor limit is configured on the system, the server is unable to accept new connections, and the process is also prevented from opening other resources, e.g. a file. If no file descriptor limit is configured, this leads to excessive memory usage and causes the system to run out of memory.
CVE-2021-22884: DNS rebinding in --inspect:
Affected Node.js versions are vulnerable to denial of service attacks when the whitelist includes “localhost6”. When “localhost6” is not present in /etc/hosts, it is just an ordinary domain that is resolved via DNS, i.e., over network. If the attacker controls the victim's DNS server or can spoof its responses, the DNS rebinding protection can be bypassed by using the “localhost6” domain. As long as the attacker uses the “localhost6” domain, they can still apply the attack described in CVE-2018-7160.
CVE-2021-23840: OpenSSL - Integer overflow in CipherUpdate:
This is a vulnerability in OpenSSL which may be exploited through Node.js.
Changelog for Node v15.9.0 (Current)
crypto: add keyObject.export() 'jwk' format option
deps: upgrade to libuv 1.41.0
fs:
add fsPromises.watch()
use a default callback for fs.close()
add AbortSignal support to watch
perf_hooks: introduce createHistogram
stream: improve Readable.from error handling
timers: introduce setInterval async iterator
tls: add ability to get cert/peer cert as X509Certificate object
Changelog for Node v15.8.0 (Current)
crypto: add generatePrime/checkPrime
crypto: experimental (Ed/X)25519/(Ed/X)448 support
deps: upgrade npm to 7.5.0. This update adds a new npm diff command.
dgram: support AbortSignal in createSocket
doc: add Zijian Liu to collaborators
esm: deprecate legacy main lookup for modules
readline: add history event and option to set initial history
readline: add support for the AbortController to the question method
Changelog for Node v15.7.0 (Current)
buffer:
introduce Blob
add base64url encoding option
fs: allow position parameter to be a BigInt in read and readSync
http:
attach request as res.req
expose urlToHttpOptions utility
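The new base64url encoding option on Buffer produces URL- and filename-safe base64 ('+' becomes '-', '/' becomes '_', and the '=' padding is dropped). A small sketch:

```javascript
// Compare standard base64 with the new URL-safe base64url encoding.
const data = Buffer.from('hello world');

console.log(data.toString('base64'));    // aGVsbG8gd29ybGQ=
console.log(data.toString('base64url')); // aGVsbG8gd29ybGQ

// Decoding accepts the same encoding name:
const roundTrip = Buffer.from('aGVsbG8gd29ybGQ', 'base64url').toString();
console.log(roundTrip);                  // hello world
```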
Changelog for Node v15.6.0 (Current)
child_process:
add 'overlapped' stdio flag
support AbortSignal in fork
crypto:
implement basic secure heap support
fixup bug in keygen error handling
introduce X509Certificate API
implement randomUUID
doc:
update release key for Danielle Adams
add dnlup to collaborators
add panva to collaborators
add yashLadha to collaborators
http: set lifo as the default scheduling strategy in Agent
net: support abortSignal in server.listen
process: add direct access to rss without iterating pages
v8: fix native serdes constructors
Changelog for Node v15.5.1 (Current)
This is a security release. Vulnerabilities fixed:
CVE-2020-8265: use-after-free in TLSWrap (High): Affected Node.js versions are vulnerable to a use-after-free bug in its TLS implementation. When writing to a TLS enabled socket, node::StreamBase::Write calls node::TLSWrap::DoWrite with a freshly allocated WriteWrap object as first argument. If the DoWrite method does not return an error, this object is passed back to the caller as part of a StreamWriteResult structure. This may be exploited to corrupt memory leading to a Denial of Service or potentially other exploits.
CVE-2020-8287: HTTP Request Smuggling in nodejs (Low): Affected versions of Node.js allow two copies of a header field in a http request. For example, two Transfer-Encoding header fields. In this case Node.js identifies the first header field and ignores the second. This can lead to HTTP Request Smuggling.
Changelog for Node v15.5.0 (Current)
OpenSSL-1.1.1i: OpenSSL-1.1.1i contains a fix for CVE-2020-1971: OpenSSL - EDIPARTYNAME NULL pointer de-reference (High). This is a vulnerability in OpenSSL which may be exploited through Node.js.
Extended support for AbortSignal in child_process and stream:
The following APIs now support an AbortSignal in their options object:
child_process.spawn(): Calling .abort() on the corresponding AbortController is similar to calling .kill() on the child process except the error passed to the callback will be an AbortError
new stream.Writable() and new stream.Readable(): Calling .abort() on the corresponding AbortController will behave the same way as calling .destroy(new AbortError()) on the stream
BigInt support in querystring.stringify():
If querystring.stringify() is called with an object that contains BigInt values, they will now be serialized to their decimal representation instead of the empty string.
Additions to the C++ embedder APIs:
A new IsolateSettingsFlag is available for those calling SetIsolateUpForNode() : SHOULD_NOT_SET_PREPARE_STACK_TRACE_CALLBACK can be used to prevent Node.js from setting a custom callback to prepare stack traces.
stream: add FileHandle support to Read/WriteStream
worker: add experimental BroadcastChannel
Changelog for Node v15.3.0 (Current)
dns: add a cancel() method to the promise Resolver
events: add max listener warning for EventTarget
http: add support for abortsignal to http.request
http2: allow setting the local window size of a session
lib: add throws option to fs.f/l/statSync
path: add path/posix and path/win32 alias modules
readline: add getPrompt to get the current prompt
src: add loop idle time in diagnostic report
util: add util/types alias module
Changelog for Node v15.2.0 (Current)
events: getEventListeners static
fs: support abortsignal in writeFile, add support for AbortSignal in readFile
stream: fix thrown object reference
Changelog for Node v15.1.0 (Current)
Diagnostics channel (experimental module): diagnostics_channel is a new experimental module that provides an API to create named channels to report arbitrary message data for diagnostics purposes. With diagnostics_channel, Node.js core and module authors can publish contextual data about what they are doing at a given time. This could be the hostname and query string of a mysql query, for example.
New child process 'spawn' event: Instances of ChildProcess now emit a new 'spawn' event once the child process has spawned successfully. If emitted, the 'spawn' event comes before all other events and before any data is received via stdout or stderr. The 'spawn' event will fire regardless of whether an error occurs within the spawned process. For example, if bash some-command spawns successfully, the 'spawn' event will fire, though bash may fail to spawn some-command. This caveat also applies when using { shell: true }.
Set the local address for DNS resolution: It is now possible to set the local IP address used by a Resolver instance to send its requests. This allows programs to specify outbound interfaces when used on multi-homed systems. The resolver will use the v4 local address when making requests to IPv4 DNS servers, and the v6 local address when making requests to IPv6 DNS servers.
Control V8 coverage at runtime: The v8 module includes two new methods to control the V8 coverage started by the NODE_V8_COVERAGE environment variable. With v8.takeCoverage(), it is possible to write a coverage report to disk on demand. This can be done multiple times during the lifetime of the process, and the execution counter will be reset on each call. When the process is about to exit, one last coverage will still be written to disk, unless v8.stopCoverage() was invoked before. The v8.stopCoverage() method allows to stop the coverage collection, so that V8 can release the execution counters and optimize code.
Analyze Worker's event loop utilization: Worker instances now have a performance property, with a single eventLoopUtilization method that can be used to gather information about the worker's event loop utilization between the 'online' and 'exit' events. The method works the same way as perf_hooks' eventLoopUtilization().
Take a V8 heap snapshot just before running out of memory (experimental):
With the new --heapsnapshot-near-heap-limit=max_count experimental command line flag, it is now possible to automatically generate a heap snapshot when the V8 heap usage is approaching the heap limit. max_count should be a non-negative integer (in which case Node.js will write no more than max_count snapshots to disk). When generating snapshots, garbage collection may be triggered and bring the heap usage down, therefore multiple snapshots may be written to disk before the Node.js instance finally runs out of memory. These heap snapshots can be compared to determine what objects are being allocated during the time consecutive snapshots are taken.
The highlights in this release include improved diagnostics, an upgrade of V8, an experimental Async Local Storage API, hardening of the streams APIs, removal of the Experimental Modules warning, and the removal of some long deprecated APIs.
Node.js 14 was promoted to Long-term Support (LTS) in October 2020. As a reminder — both Node.js 12 and Node.js 10 will remain in long-term support until April 2022 and April 2021 respectively.
Node.js LTS v14 Changelogs
Changelog for Node v14.16.0 (Current)
This is a security release. Vulnerabilities fixed:
CVE-2021-22883: HTTP2 'unknownProtocol' causes Denial of Service by resource exhaustion: Affected Node.js versions are vulnerable to denial of service attacks when too many connection attempts with an 'unknownProtocol' are established. This leads to a leak of file descriptors. If a file descriptor limit is configured on the system, the server is unable to accept new connections, and the process is also prevented from opening other resources, e.g. files. If no file descriptor limit is configured, this leads to excessive memory usage and causes the system to run out of memory.
CVE-2021-22884: DNS rebinding in --inspect: Affected Node.js versions are vulnerable to denial of service attacks when the whitelist includes “localhost6”. When “localhost6” is not present in /etc/hosts, it is just an ordinary domain that is resolved via DNS, i.e., over network. If the attacker controls the victim's DNS server or can spoof its responses, the DNS rebinding protection can be bypassed by using the “localhost6” domain. As long as the attacker uses the “localhost6” domain, they can still apply the attack described in CVE-2018-7160.
CVE-2021-23840: OpenSSL - Integer overflow in CipherUpdate: This is a vulnerability in OpenSSL which may be exploited through Node.js.
Changelog for Node v14.15.5 (Current)
deps:
upgrade npm to 6.14.11
V8: backport dfcf1e86fac0
stream,zlib: do not use _stream_* anymore
Changelog for Node v14.15.4 (Current)
This is a security release. Vulnerabilities fixed:
CVE-2020-1971: OpenSSL - EDIPARTYNAME NULL pointer de-reference (High): This is a vulnerability in OpenSSL which may be exploited through Node.js.
CVE-2020-8265: use-after-free in TLSWrap (High): Affected Node.js versions are vulnerable to a use-after-free bug in its TLS implementation. When writing to a TLS enabled socket, node::StreamBase::Write calls node::TLSWrap::DoWrite with a freshly allocated WriteWrap object as first argument. If the DoWrite method does not return an error, this object is passed back to the caller as part of a StreamWriteResult structure. This may be exploited to corrupt memory leading to a Denial of Service or potentially other exploits.
CVE-2020-8287: HTTP Request Smuggling in nodejs (Low): Affected versions of Node.js allow two copies of a header field in a http request. For example, two Transfer-Encoding header fields. In this case Node.js identifies the first header field and ignores the second. This can lead to HTTP Request Smuggling.
Changelog for Node v14.15.3 (Current)
Node.js v14.15.2 included a commit that has caused reported breakages when cloning request objects. This release reverts the commit that introduced the behaviour change. See https://github.com/nodejs/node/issues/36550 for more details.
Changelog for Node v14.15.2 (Current)
deps: upgrade npm to 6.14.9
deps: update acorn to v8.0.4
doc: add release key for Danielle Adams
http2: check write not scheduled in scope destructor
stream: fix regression on duplex end
Changelog for Node v14.15.1 (Current)
This is a security release. Vulnerabilities fixed:
CVE-2020-8277: Denial of Service through DNS request (High). A Node.js application that allows an attacker to trigger a DNS request for a host of their choice could trigger a Denial of Service by getting the application to resolve a DNS record with a larger number of responses.
Changelog for Node v14.15.0 (Current)
This release marks the transition of Node.js 14.x into Long Term Support (LTS) with the codename 'Fermium'. The 14.x release line now moves into "Active LTS" and will remain so until October 2021. After that time, it will move into "Maintenance" until end of life in April 2023.
doc: add missing link in Node.js 14 Changelog
doc: fix Node.js 14.x changelogs
Revert "test: mark test-webcrypto-encrypt-decrypt-aes flaky"
Changelog for Node v14.14.0 (Current)
crypto: update certdata to NSS 3.56
doc: add aduh95 to collaborators
fs: add rm method
http: allow passing array of key/val into writeHead
src: expose v8::Isolate setup callbacks
Changelog for Node v14.13.1 (Current)
fs: remove experimental from rmdir recursive
Changelog for Node v14.13.0 (Current)
deps: upgrade to libuv 1.40.0
module: named exports for CJS via static analysis
module: exports pattern support
src: allow N-API addon in AddLinkedBinding()
Changelog for Node v14.12.0 (Current)
deps:
update to uvwasi 0.0.11
n-api:
create N-API version 7
add more property defaults
Changelog for Node v14.11.0 (Current)
This is a security release. Vulnerabilities fixed:
Denial of Service by resource exhaustion CWE-400 due to unfinished HTTP/1.1 requests.
HTTP Request Smuggling due to CR-to-Hyphen conversion.
Changelog for Node v14.10.1 (Current)
Node.js 14.10.0 included a streams regression with async generators and a docs rendering regression that are being fixed in this release.
Changelog for Node v14.10.0 (Current)
buffer: also alias BigUInt methods
crypto: add randomInt function
perf_hooks: add idleTime and event loop util
stream: simpler and faster Readable async iterator
worker: (SEMVER-MINOR) add option to track unmanaged file descriptors
Changelog for Node v14.5.0 (Current)
V8 engine is updated to version 8.3
This version includes performance improvements and now allows WebAssembly modules to request memories up to 4GB in size. For more information, have a look at the official V8 blog post.
Initial experimental implementation of EventTarget
This version introduces a new experimental API, EventTarget, which provides a DOM interface implemented by objects that can receive events and may have listeners for them. It is an adaptation of the Web API EventTarget.
Changelog for Node v14.4.0 (Current)
This is a security release. Vulnerabilities fixed:
CVE-2020-8172: TLS session reuse can lead to host certificate verification bypass (High).
CVE-2020-11080: HTTP/2 Large Settings Frame DoS (Low).
CVE-2020-8174: napi_get_value_string_*() allows various kinds of memory corruption (High).
Changelog for Node v14.3.0 (Current)
REPL previews improvements with autocompletion: The output preview is changed to generate previews for autocompleted input instead of the actual input. Pressing <enter> during a preview is now going to evaluate the whole string including the autocompleted part. Pressing <escape> cancels that behavior.
Support for Top-Level Await: It's now possible to use the await keyword outside of async functions, with the --experimental-top-level-await flag.
Changelog for Node v14.2.0 (Current)
Track function calls with assert.CallTracker: assert.CallTracker is a new experimental API that allows you to track and later verify the number of times a function was called.
Console groupIndentation option: The Console constructor (require('console').Console) now supports different group indentations.
Changelog for Node v14.1.0 (Current)
deps: upgrade openssl sources to 1.1.1g
doc: add juanarbol as collaborator
http: doc deprecate abort and improve docs
module: do not warn when accessing __esModule of unfinished exports
n-api: detect deadlocks in thread-safe function
src: deprecate embedder APIs with replacements
stream:
don't emit end after close
don't wait for close on legacy streams
pipeline should only destroy un-finished streams
vm: add importModuleDynamically option to compileFunction
Changelog for Node v14.0.0 (Current)
Deprecations:
(SEMVER-MAJOR) crypto: move pbkdf2 without digest to EOL
(SEMVER-MAJOR) fs: deprecate closing FileHandle on garbage collection
(SEMVER-MAJOR) http: move OutboundMessage.prototype.flush to EOL
(SEMVER-MAJOR) lib: move GLOBAL and root aliases to EOL
(SEMVER-MAJOR) os: move tmpDir() to EOL
(SEMVER-MAJOR) src: remove deprecated wasm type check
(SEMVER-MAJOR) stream: move _writableState.buffer to EOL
(SEMVER-MINOR) doc: deprecate process.mainModule
(SEMVER-MINOR) doc: deprecate process.umask() with no arguments
ECMAScript Modules - Experimental Warning Removal
In Node.js 13 we removed the need to include the --experimental-modules flag, but when running EcmaScript Modules in Node.js, this would still result in a warning ExperimentalWarning: The ESM module loader is experimental.
As of Node.js 14 there is no longer this warning when using ESM in Node.js. However, the ESM implementation in Node.js remains experimental. As per our stability index: “The feature is not subject to Semantic Versioning rules. Non-backward compatible changes or removal may occur in any future release.” Users should be cautious when using the feature in production environments.
The ESM implementation in Node.js is still experimental but we do believe that we are getting very close to being able to call ESM in Node.js “stable”. Removing the warning is a huge step in that direction.
New V8 ArrayBuffer API: src: migrate to new V8 ArrayBuffer API. Multiple ArrayBuffers pointing to the same base address are no longer allowed by V8. This may impact native addons.
Toolchain and Compiler Upgrades
(SEMVER-MAJOR) build: update macos deployment target to 10.13 for 14.x
(SEMVER-MAJOR) doc: update cross compiler machine for Linux armv7
(SEMVER-MAJOR) doc: update Centos/RHEL releases use devtoolset-8
(SEMVER-MAJOR) doc: remove SmartOS from official binaries
(SEMVER-MAJOR) win: block running on EOL Windows versions
It is expected that there will be an ABI mismatch on ARM between the Node.js binary and native addons. Native addons are only broken if they interact with std::shared_ptr. This is expected to be fixed in a later version of Node.js 14.
Update to V8 8.1: (SEMVER-MAJOR) deps: update V8 to 8.1.307.20
Enables Optional Chaining by default
Enables Nullish Coalescing by default
Enables Intl.DisplayNames by default
Enables calendar and numberingSystem options for Intl.DateTimeFormat by default
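The first two of these are worth a quick sketch, since they pair naturally:

```javascript
// Optional chaining (?.) short-circuits to undefined instead of throwing
// on null/undefined; nullish coalescing (??) supplies a default only for
// null/undefined (unlike ||, which also replaces 0, '' and false).
const config = { server: { port: 0 } };

const port = config.server?.port ?? 3000;        // 0 (?? keeps the valid 0)
const host = config.server?.host ?? 'localhost'; // 'localhost'
const badPort = config.server.port || 3000;      // 3000 (|| discards the 0)

console.log(port, host, badPort); // 0 localhost 3000
```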
Other Notable Changes:
cli, report: move --report-on-fatalerror to stable
deps: upgrade to libuv 1.37.0
fs: add fs/promises alias module
Learn More Node.js from RisingStack
At RisingStack, we've been writing JavaScript / Node tutorials for the community for the past 5 years. If you're new to Node.js, we recommend checking out our Node Hero tutorial series! The goal of this series is to help you get started with Node.js and make sure you understand how to write an application using it.
XGBoost is a powerful and effective implementation of the gradient boosting ensemble algorithm.
It can be challenging to configure the hyperparameters of XGBoost models, which often leads to using large grid search experiments that are both time consuming and computationally expensive.
An alternate approach to configuring XGBoost models is to evaluate the performance of the model each iteration of the algorithm during training and to plot the results as learning curves. These learning curve plots provide a diagnostic tool that can be interpreted and suggest specific changes to model hyperparameters that may lead to improvements in predictive performance.
In this tutorial, you will discover how to plot and interpret learning curves for XGBoost models in Python.
After completing this tutorial, you will know:
Learning curves provide a useful diagnostic tool for understanding the training dynamics of supervised learning models like XGBoost.
How to configure XGBoost to evaluate datasets each iteration and plot the results as learning curves.
How to interpret and use learning curve plots to improve XGBoost model performance.
Let’s get started.
Tune XGBoost Performance With Learning Curves Photo by Bernard Spragg. NZ, some rights reserved.
Tutorial Overview
This tutorial is divided into four parts; they are:
Extreme Gradient Boosting
Learning Curves
Plot XGBoost Learning Curve
Tune XGBoost Model Using Learning Curves
Extreme Gradient Boosting
Gradient boosting refers to a class of ensemble machine learning algorithms that can be used for classification or regression predictive modeling problems.
Ensembles are constructed from decision tree models. Trees are added one at a time to the ensemble and fit to correct the prediction errors made by prior models. This is a type of ensemble machine learning model referred to as boosting.
Models are fit using any arbitrary differentiable loss function and gradient descent optimization algorithm. This gives the technique its name, “gradient boosting,” as the loss gradient is minimized as the model is fit, much like a neural network.
Extreme Gradient Boosting, or XGBoost for short, is an efficient open-source implementation of the gradient boosting algorithm. As such, XGBoost is an algorithm, an open-source project, and a Python library.
It is designed to be both computationally efficient (e.g. fast to execute) and highly effective, perhaps more effective than other open-source implementations.
The two main reasons to use XGBoost are execution speed and model performance.
XGBoost dominates structured or tabular datasets on classification and regression predictive modeling problems. The evidence is that it is the go-to algorithm for competition winners on the Kaggle competitive data science platform.
Among the 29 challenge winning solutions published at Kaggle’s blog during 2015, 17 solutions used XGBoost. […] The success of the system was also witnessed in KDDCup 2015, where XGBoost was used by every winning team in the top-10.
Now that we are familiar with what XGBoost is and why it is important, let’s take a closer look at learning curves.
Learning Curves
Generally, a learning curve is a plot that shows time or experience on the x-axis and learning or improvement on the y-axis.
Learning curves are widely used in machine learning for algorithms that learn (optimize their internal parameters) incrementally over time, such as deep learning neural networks.
The metric used to evaluate learning could be maximizing, meaning that better scores (larger numbers) indicate more learning. An example would be classification accuracy.
It is more common to use a score that is minimizing, such as loss or error, whereby better scores (smaller numbers) indicate more learning, and a value of 0.0 indicates that the training dataset was learned perfectly and no mistakes were made.
During the training of a machine learning model, the current state of the model at each step of the training algorithm can be evaluated. It can be evaluated on the training dataset to give an idea of how well the model is “learning.” It can also be evaluated on a hold-out validation dataset that is not part of the training dataset. Evaluation on the validation dataset gives an idea of how well the model is “generalizing.”
It is common to create dual learning curves for a machine learning model during training on both the training and validation datasets.
The shape and dynamics of a learning curve can be used to diagnose the behavior of a machine learning model, and in turn, perhaps suggest the type of configuration changes that may be made to improve learning and/or performance.
There are three common dynamics that you are likely to observe in learning curves; they are:
Underfit.
Overfit.
Good Fit.
Most commonly, learning curves are used to diagnose overfitting behavior of a model that can be addressed by tuning the hyperparameters of the model.
Overfitting refers to a model that has learned the training dataset too well, including the statistical noise or random fluctuations in the training dataset.
The problem with overfitting is that the more specialized the model becomes to training data, the less well it is able to generalize to new data, resulting in an increase in generalization error. This increase in generalization error can be measured by the performance of the model on the validation dataset.
Now that we are familiar with learning curves, let’s look at how we might plot learning curves for XGBoost models.
Plot XGBoost Learning Curve
In this section, we will plot the learning curve for an XGBoost model.
First, we need a dataset to use as the basis for fitting and evaluating the model.
We will use a synthetic binary (two-class) classification dataset in this tutorial.
The make_classification() scikit-learn function can be used to create a synthetic classification dataset. In this case, we will use 50 input features (columns) and generate 10,000 samples (rows). The seed for the pseudo-random number generator is fixed to ensure the same base “problem” is used each time samples are generated.
The example below generates the synthetic classification dataset and summarizes the shape of the generated data.
# test classification dataset
from sklearn.datasets import make_classification
# define dataset
X, y = make_classification(n_samples=10000, n_features=50, n_informative=50, n_redundant=0, random_state=1)
# summarize the dataset
print(X.shape, y.shape)
Running the example generates the data and reports the size of the input and output components, confirming the expected shape.
(10000, 50) (10000,)
Next, we can fit an XGBoost model on this dataset and plot learning curves.
First, we must split the dataset into one portion that will be used to train the model (train) and another portion that will not be used to train the model, but will be held back and used to evaluate the model each step of the training algorithm (test set or validation set).
...
# split data into train and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.50, random_state=1)
We can then define an XGBoost classification model with default hyperparameters.
...
# define the model
model = XGBClassifier()
Next, the model can be fit on the dataset.
In this case, we must specify to the training algorithm that we want it to evaluate the performance of the model on the train and test sets each iteration (e.g. after each new tree is added to the ensemble).
To do this we must specify the datasets to evaluate and the metric to evaluate.
The dataset must be specified as a list of tuples, where each tuple contains the input and output columns of a dataset and each element in the list is a different dataset to evaluate, e.g. the train and the test sets.
...
# define the datasets to evaluate each iteration
evalset = [(X_train, y_train), (X_test,y_test)]
There are many metrics we may want to evaluate, although given that it is a classification task, we will evaluate the log loss (cross-entropy) of the model which is a minimizing score (lower values are better).
This can be achieved by specifying the “eval_metric” argument when calling fit() and providing it the name of the metric we will evaluate: ‘logloss‘. We can also specify the datasets to evaluate via the “eval_set” argument. The fit() function takes the training dataset as the first two arguments as per normal.
...
# fit the model
model.fit(X_train, y_train, eval_metric='logloss', eval_set=evalset)
Once the model is fit, we can evaluate its performance as the classification accuracy on the test dataset. We can then retrieve the metrics calculated during training via the evals_result() function.
This returns a dictionary organized first by dataset (‘validation_0‘ and ‘validation_1‘) and then by metric (‘logloss‘).
We can create line plots of metrics for each dataset.
...
# plot learning curves
pyplot.plot(results['validation_0']['logloss'], label='train')
pyplot.plot(results['validation_1']['logloss'], label='test')
# show the legend
pyplot.legend()
# show the plot
pyplot.show()
And that’s it.
Tying all of this together, the complete example of fitting an XGBoost model on the synthetic classification task and plotting learning curves is listed below.
# plot learning curve of an xgboost model
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from xgboost import XGBClassifier
from matplotlib import pyplot
# define dataset
X, y = make_classification(n_samples=10000, n_features=50, n_informative=50, n_redundant=0, random_state=1)
# split data into train and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.50, random_state=1)
# define the model
model = XGBClassifier()
# define the datasets to evaluate each iteration
evalset = [(X_train, y_train), (X_test,y_test)]
# fit the model
model.fit(X_train, y_train, eval_metric='logloss', eval_set=evalset)
# evaluate performance
yhat = model.predict(X_test)
score = accuracy_score(y_test, yhat)
print('Accuracy: %.3f' % score)
# retrieve performance metrics
results = model.evals_result()
# plot learning curves
pyplot.plot(results['validation_0']['logloss'], label='train')
pyplot.plot(results['validation_1']['logloss'], label='test')
# show the legend
pyplot.legend()
# show the plot
pyplot.show()
Running the example fits the XGBoost model, retrieves the calculated metrics, and plots learning curves.
Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.
First, the model performance is reported, showing that the model achieved a classification accuracy of about 94.5% on the hold-out test set.
Accuracy: 0.945
The plot shows learning curves for the train and test dataset where the x-axis is the number of iterations of the algorithm (or the number of trees added to the ensemble) and the y-axis is the logloss of the model. Each line shows the logloss per iteration for a given dataset.
From the learning curves, we can see that the performance of the model on the training dataset (blue line) is better or has lower loss than the performance of the model on the test dataset (orange line), as we might generally expect.
Learning Curves for the XGBoost Model on the Synthetic Classification Dataset
Now that we know how to plot learning curves for XGBoost models, let’s look at how we might use the curves to improve model performance.
Tune XGBoost Model Using Learning Curves
We can use the learning curves as a diagnostic tool.
The curves can be interpreted and used as the basis for suggesting specific changes to the model configuration that might result in better performance.
The model and result in the previous section can be used as a baseline and starting point.
Looking at the plot, we can see that both curves are sloping down and suggest that more iterations (adding more trees) may result in a further decrease in loss.
Let’s try it out.
We can increase the number of iterations of the algorithm via the “n_estimators” hyperparameter that defaults to 100. Let’s increase it to 500.
...
# define the model
model = XGBClassifier(n_estimators=500)
The complete example is listed below.
# plot learning curve of an xgboost model
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from xgboost import XGBClassifier
from matplotlib import pyplot
# define dataset
X, y = make_classification(n_samples=10000, n_features=50, n_informative=50, n_redundant=0, random_state=1)
# split data into train and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.50, random_state=1)
# define the model
model = XGBClassifier(n_estimators=500)
# define the datasets to evaluate each iteration
evalset = [(X_train, y_train), (X_test,y_test)]
# fit the model
model.fit(X_train, y_train, eval_metric='logloss', eval_set=evalset)
# evaluate performance
yhat = model.predict(X_test)
score = accuracy_score(y_test, yhat)
print('Accuracy: %.3f' % score)
# retrieve performance metrics
results = model.evals_result()
# plot learning curves
pyplot.plot(results['validation_0']['logloss'], label='train')
pyplot.plot(results['validation_1']['logloss'], label='test')
# show the legend
pyplot.legend()
# show the plot
pyplot.show()
Running the example fits and evaluates the model and plots the learning curves of model performance.
Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.
We can see that more iterations have resulted in a lift in accuracy from about 94.5% to about 95.8%.
Accuracy: 0.958
We can see from the learning curves that indeed the additional iterations of the algorithm caused the curves to continue to drop and then level out after perhaps 150 iterations, where they remain reasonably flat.
Learning Curves for the XGBoost Model With More Iterations
The long flat curves may suggest that the algorithm is learning too fast and we may benefit from slowing it down.
This can be achieved using the learning rate, which limits the contribution of each tree added to the ensemble. It is controlled via the “eta” hyperparameter (also exposed as “learning_rate”) and defaults to 0.3. We can try a smaller value, such as 0.05.
...
# define the model
model = XGBClassifier(n_estimators=500, eta=0.05)
The complete example is listed below.
# plot learning curve of an xgboost model
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from xgboost import XGBClassifier
from matplotlib import pyplot
# define dataset
X, y = make_classification(n_samples=10000, n_features=50, n_informative=50, n_redundant=0, random_state=1)
# split data into train and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.50, random_state=1)
# define the model
model = XGBClassifier(n_estimators=500, eta=0.05)
# define the datasets to evaluate each iteration
evalset = [(X_train, y_train), (X_test, y_test)]
# fit the model
model.fit(X_train, y_train, eval_metric='logloss', eval_set=evalset)
# evaluate performance
yhat = model.predict(X_test)
score = accuracy_score(y_test, yhat)
print('Accuracy: %.3f' % score)
# retrieve performance metrics
results = model.evals_result()
# plot learning curves
pyplot.plot(results['validation_0']['logloss'], label='train')
pyplot.plot(results['validation_1']['logloss'], label='test')
# show the legend
pyplot.legend()
# show the plot
pyplot.show()
Running the example fits and evaluates the model and plots the learning curves of model performance.
Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.
We can see that the smaller learning rate has made the accuracy worse, dropping from about 95.8% to about 95.1%.
Accuracy: 0.951
We can see from the learning curves that learning has indeed slowed right down. The curves suggest that we could continue to add iterations and perhaps achieve better performance, as the curves would then have more opportunity to continue decreasing.
Learning Curves for the XGBoost Model With Smaller Learning Rate
Let’s try increasing the number of iterations from 500 to 2,000.
...
# define the model
model = XGBClassifier(n_estimators=2000, eta=0.05)
The complete example is listed below.
# plot learning curve of an xgboost model
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from xgboost import XGBClassifier
from matplotlib import pyplot
# define dataset
X, y = make_classification(n_samples=10000, n_features=50, n_informative=50, n_redundant=0, random_state=1)
# split data into train and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.50, random_state=1)
# define the model
model = XGBClassifier(n_estimators=2000, eta=0.05)
# define the datasets to evaluate each iteration
evalset = [(X_train, y_train), (X_test, y_test)]
# fit the model
model.fit(X_train, y_train, eval_metric='logloss', eval_set=evalset)
# evaluate performance
yhat = model.predict(X_test)
score = accuracy_score(y_test, yhat)
print('Accuracy: %.3f' % score)
# retrieve performance metrics
results = model.evals_result()
# plot learning curves
pyplot.plot(results['validation_0']['logloss'], label='train')
pyplot.plot(results['validation_1']['logloss'], label='test')
# show the legend
pyplot.legend()
# show the plot
pyplot.show()
Running the example fits and evaluates the model and plots the learning curves of model performance.
Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.
We can see that more iterations have given the algorithm more space to improve, achieving an accuracy of 96.1%, the best so far.
Accuracy: 0.961
The learning curves again show a stable convergence of the algorithm with a steep decrease and long flattening out.
Learning Curves for the XGBoost Model With Smaller Learning Rate and Many Iterations
We could repeat the process of decreasing the learning rate and increasing the number of iterations to see if further improvements are possible.
Another approach to slowing down learning is to add regularization in the form of reducing the number of samples and features (rows and columns) used to construct each tree in the ensemble.
In this case, we will try halving the number of samples and features respectively via the “subsample” and “colsample_bytree” hyperparameters.
...
# define the model
model = XGBClassifier(n_estimators=2000, eta=0.05, subsample=0.5, colsample_bytree=0.5)
The complete example is listed below.
# plot learning curve of an xgboost model
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from xgboost import XGBClassifier
from matplotlib import pyplot
# define dataset
X, y = make_classification(n_samples=10000, n_features=50, n_informative=50, n_redundant=0, random_state=1)
# split data into train and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.50, random_state=1)
# define the model
model = XGBClassifier(n_estimators=2000, eta=0.05, subsample=0.5, colsample_bytree=0.5)
# define the datasets to evaluate each iteration
evalset = [(X_train, y_train), (X_test, y_test)]
# fit the model
model.fit(X_train, y_train, eval_metric='logloss', eval_set=evalset)
# evaluate performance
yhat = model.predict(X_test)
score = accuracy_score(y_test, yhat)
print('Accuracy: %.3f' % score)
# retrieve performance metrics
results = model.evals_result()
# plot learning curves
pyplot.plot(results['validation_0']['logloss'], label='train')
pyplot.plot(results['validation_1']['logloss'], label='test')
# show the legend
pyplot.legend()
# show the plot
pyplot.show()
Running the example fits and evaluates the model and plots the learning curves of model performance.
Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.
We can see that the addition of regularization has resulted in a further improvement, bumping accuracy from about 96.1% to about 96.6%.
Accuracy: 0.966
The curves suggest that regularization has slowed learning and that perhaps increasing the number of iterations may result in further improvements.
Learning Curves for the XGBoost Model with Regularization
This process can continue, and I am interested to see what you can come up with.
Further Reading
This section provides more resources on the topic if you are looking to go deeper.
In this article, we will have a look at another interesting algorithm related to Graph Theory – Boruvka’s Algorithm. We will also look at a problem with respect to this algorithm, discuss our approach and analyze the complexities.
Boruvka’s Algorithm is mainly used to find a Minimum Spanning Tree of an edge-weighted graph. Let us have a quick look at the concept of a Minimum Spanning Tree. A Minimum Spanning Tree (MST) is a subset of the edges of a weighted, undirected graph that connects all the vertices together, contains no cycles or loops, and has the minimum possible total edge weight.
Note: The Minimum Spanning Tree must connect all of the vertices. A disconnected subgraph is not an MST.
Let us understand this with an example, consider this graph:
The graph shown above is an edge-weighted, undirected graph with 6 vertices. The Minimum Spanning Tree of this graph looks like this:
Explanation:
The above image shows the Minimum Spanning Tree of graph G: it connects all the vertices together, and the resulting subgraph has no loops or cycles. We select the smallest-weight edge from each vertex and connect its two endpoints, avoiding edges that would form a cycle with edges already chosen. Adding up the weights of the chosen edges gives the total weight of the MST: 1 (edge 1–2) + 4 (edge 1–4) + 5 (edge 2–3) + 3 (edge 2–5) + 2 (edge 5–6), so the total weight of the MST = 15.
Note: A Minimum Spanning Tree on N vertices always has exactly N – 1 edges.
Boruvka’s Algorithm
Now let us see how Boruvka’s Algorithm is helpful in finding the MST of a graph.
The idea is to start with every node as its own component and then repeatedly connect components together.
For each component, we find the cheapest outgoing edge, that is, the minimum-weight edge with exactly one endpoint inside the component, and add it to the MST, merging the two components it connects.
Edges whose endpoints already lie in the same component are ignored, since adding them would create a cycle.
We repeat this process until only one component remains. Each round reduces the number of components to at most half of its former value, so after logarithmically many rounds the process finishes.
At the end, the sum of the weights of the edges we added is the weight of the Minimum Spanning Tree.
Implementation in Java
Step 1:
We represent each edge of the graph using a class with three fields: V, U, and Cost, where V is the source vertex, U is the destination, and Cost is the weight of the edge between them. We use two arrays, Parent and Min. The Parent array stores the parent of each node (for union-find), and Min stores, for each component, the index of its cheapest outgoing edge. Initially, each node is its own parent.
Step 2:
First we set the number of components to the number of vertices n. For each component we initialize Min to -1, indicating that no cheapest edge has been found yet. Then, for each edge, if its source and end vertices belong to the same component we skip it. Otherwise, we look up the root (parent) of each endpoint’s component and record the edge if it is the cheapest outgoing edge seen so far for that component.
Step 3:
Then we iterate through the components; if a component has recorded a cheapest edge (u, v), we merge the two components it connects into one. Before merging, we check that the endpoints are in different components, which avoids merging a component with itself and creating a loop or cycle. If the merge succeeds, we add the edge’s weight to the answer and decrement the component count. We repeat these steps until a single component remains; each round visits every edge once and at least halves the number of components.
Now let us look at the implementation of the above in Java code:
import java.util.*;

class Graph_Edge
{
    int v;
    int u;
    int cost;

    Graph_Edge(int v, int u, int cost)
    {
        this.v = v;
        this.u = u;
        this.cost = cost;
    }
}

public class Boruvka_MST
{
    static int parent[] = new int[7];
    static int Min[] = new int[7];

    public static void main(String args[])
    {
        // No. of vertices in graph.
        int n = 6;
        Graph_Edge g[] = new Graph_Edge[10];
        // Creating the graph with source, end and cost of each edge.
        g[1] = new Graph_Edge(1, 2, 1);
        g[2] = new Graph_Edge(1, 4, 4);
        g[3] = new Graph_Edge(2, 4, 7);
        g[4] = new Graph_Edge(2, 5, 3);
        g[5] = new Graph_Edge(2, 6, 6);
        g[6] = new Graph_Edge(3, 2, 5);
        g[7] = new Graph_Edge(3, 6, 9);
        g[8] = new Graph_Edge(6, 5, 2);
        g[9] = new Graph_Edge(5, 4, 8);
        // Initializes parent of all nodes.
        init(n);
        int edges = g.length - 1;
        int components = n;
        int ans_MST = 0;
        while (components > 1)
        {
            // Initialize Min for each component as -1.
            for (int i = 1; i <= n; i++)
            {
                Min[i] = -1;
            }
            for (int i = 1; i <= edges; i++)
            {
                // If both source and end are in the same component we don't process the edge.
                if (root(g[i].v) == root(g[i].u))
                    continue;
                // Record the edge if it is the cheapest outgoing edge of either component.
                int r_v = root(g[i].v);
                if (Min[r_v] == -1 || g[i].cost < g[Min[r_v]].cost)
                    Min[r_v] = i;
                int r_u = root(g[i].u);
                if (Min[r_u] == -1 || g[i].cost < g[Min[r_u]].cost)
                    Min[r_u] = i;
            }
            for (int i = 1; i <= n; i++)
            {
                if (Min[i] != -1)
                {
                    // Merge the two components joined by the cheapest edge, if still separate.
                    if (merge(g[Min[i]].v, g[Min[i]].u))
                    {
                        ans_MST += g[Min[i]].cost;
                        components--;
                    }
                }
            }
        }
        System.out.println("The Total Weight of Minimum Spanning Tree is : " + ans_MST);
    }

    // Find the root of v's component, with path compression.
    static int root(int v)
    {
        if (parent[v] == v)
            return v;
        return parent[v] = root(parent[v]);
    }

    // Merge the components of v and u; returns false if they are already merged.
    static boolean merge(int v, int u)
    {
        v = root(v);
        u = root(u);
        if (v == u)
            return false;
        parent[v] = u;
        return true;
    }

    static void init(int n)
    {
        for (int i = 1; i <= n; i++)
        {
            parent[i] = i;
        }
    }
}
Output:
The Total Weight of Minimum Spanning Tree is : 15
Note: We use a Graph array of size 10 because there are 9 edges in the example above and the vertices are numbered from 1. Likewise, the Parent and Min arrays have size 7 for the 6 vertices.
We have implemented the code for the same example as shown above. Now let us have a quick look at the complexities.
Time Complexity: For a graph with N nodes and E edges, each round scans all E edges to find the cheapest outgoing edge of every component, and each round at least halves the number of components, so there are at most log(N) rounds. The overall complexity is therefore O( E * log(N) ).
Space Complexity: We require extra space for the Parent and Min arrays, each of size proportional to the number of vertices N, so the overall space complexity is O(N).
Limitation Of Boruvka’s Algorithm
In the example above we used a graph whose edges all have distinct weights. This is a limitation of the algorithm: it requires an edge-weighted graph with distinct weights. If the edge weights are not distinct, a consistent tie-breaking rule can be used instead. A further optimization is to remove each edge in graph G that is found to connect two vertices already in the same component.
So that’s it for the article. You can try out this algorithm and dry-run it with various examples to get a clear idea, and you can also execute the code for a better understanding.
Feel free to leave your suggestions/doubts in the comment section below.
If you are on the cusp of your career, then this article is for you. You must’ve heard about cyber security from someone somewhere.
In the modern digital world that we live in, not a single day goes by that cyber security experts are not needed. The growing number of cyber threats and incidents has created deep worldwide demand for talented and qualified cyber security professionals.
In this article, we will explain the things that make cyber security a career worth pursuing and a dream worth chasing.
Cyber security professionals enjoy the thrill of working with the most advanced technologies of our times. Whether it is defending a network against malicious traffic or finding security flaws in the code of a reputed social media company, the work truly keeps you on your toes and never leaves you short of adrenaline.
When you are a cyber security expert, you are not an ordinary worker. You are a rockstar and a soldier. Sure, anyone can take an ethical hacking course online, but not everyone has the perseverance and the dedication to stay one step ahead of the hackers all the time.
Cyber security professionals are valued not just for the work that they do, but for the millions they save, the lives they protect, and the privacy of people that they keep intact. These are things that no one can put a price on, but you will be glad to know that cyber security professionals are among the highest-paid individuals in the entire information technology industry.
The reason why students are attracted to this field is that it does not just offer you a great life in the present, but also promises you a bright and shining future. Now that the world has seen what a strain of virus can do to entire economies and millions of employed professionals, we all have become a little wiser in terms of what we choose as our means of livelihood.
Did you know that during the coronavirus pandemic, cyber attacks increased in number and the demand for cyber security professionals also increased as opposed to dipping like other jobs? Therefore, it’s not surprising when young people and even mid-career professionals look at cyber security as their next best friend.
Apart from the unparalleled job security, cyber security also offers immense satisfaction for the soul for those who like to get something more than just money out of their work.
If you want to try this field out for yourself, then go fearlessly into it, knowing that anything that you want to learn is available at your fingertips. Undoubtedly you will need professional help and mentorship like a Certified Ethical Hacker or CEH certification training to become an ethical hacker or a forensic investigation course to become a computer forensic expert and so on.
It is true that cyber security is a highly technical profession and can be difficult at times, but the right trainers can make even the most complicated things simple to understand. So, go ahead and get started on your learning right now.
Make sure that you do proper research about the different branches of the field before you sign up for something. There are no limits to how much you can learn and grow in cyber security. So keep your hopes high and your mind sharp because a great opportunity lies in front of you and is calling you to make the most of it.
There is always someone in your life who needs a smile. And you definitely know who they are: a friend, cousin, parent, colleague, son, daughter, wife, or maybe even you. Then what are you waiting for?
Get them something unique and nice to cheer them up again in life. We all know that life is all about ups and downs.
In these times, only our loved ones can make us feel more comfortable and better. So be that light in someone’s life and help them cope. There are plenty of gift ideas below that will surely make them feel happier and better.
Perhaps you are looking to treat them on their birthday or another gifting occasion, though you don’t really need a reason to show up for your loved ones with a delicious cake. You can surprise them with a happy birthday cake delivery or a personalised theme cake based on their tastes. Customised cakes come in different designs and shapes; you can even get a cake featuring their name or photo. Brighten their special day with a mouth-watering cake and let your loved ones know how much you appreciate them.
Digital photo frame
We know that seeing your loved ones daily is not easy at all, but this problem can be solved with a digital photo frame. It would make a nice gift for relatives who are far away from you right now, or even just for you. One can connect this frame to a smartphone, and it will automatically display the pictures the user selects. They can control it and change the settings too, no matter how far they are from the frame, making it super easy to use. So gift a digital frame already loaded with memorable moments and surprise your loved ones. This is a wonderful way to show how much you care for them.
Aroma diffuser lamp
Nothing is more attractive than a good scent. A good smell reminds us of a particular memory or occasion, so gifting someone a pleasant scent means lighting up their life with fragrance. This diffuser will keep their environment and surroundings fresh, and beyond the fragrance, it brightens up the room with colorful lights. It comes with a diffuser, essential oil, and fragrance sticks. Whenever they are back from a hectic work day, they can light up this diffuser and lift their mood. It also gives very welcoming vibes to everyone who steps in.
Wooden scrapbook
It feels so good to revisit the old days with just a photograph. You can gift a wooden scrapbook with all their sweet memories in it. It also makes an attractive home decor item, as it can be displayed on a bookshelf or the dining table. This wooden scrapbook is very easy to use: one can easily remove and insert photographs, the bindings are very secure, and it can hold a lot of pictures. This beautiful scrapbook is a lovely way to preserve all those memories.
Happy book
Every once in a while life gets too hard on us and that is the time when we need some guidance and motivation to get through it. So if you know someone who is going through a rough time in life then it’s the perfect time to show up with some motivational book. Maybe this book will help them to change their negative thoughts into good ones and get the courage to stand up again in life.
So these are some amazing gift ideas and if you want to add a delicious cake with some of these gifts then use cakes online delivery option to get the best services.
In our previous tutorial, we developed an Online Exam System with PHP and MySQL. In this tutorial, we will implement an Expense Management System with PHP and MySQL.
Expense management systems are web-based applications that help users manage their income and expenses. Users can log in to the system, manage their income and expenses, and view expense reports for a given time range. This is a beginner-level project in which we cover sections such as managing income, expenses, and users.
Here we will develop an expense management system covering the following.
The Administrator will do the following:
Manage Income and its categories.
Manage Expenses and its categories.
View Reports
Manage Users
So let’s start developing the expense management system. The major files are:
index.php
income.php
expense.php
report.php
user.php
User.php: a class containing user-related methods.
Income.php: a class containing methods related to income.
Expense.php: a class containing methods related to expenses.
Report.php: a class containing methods related to reports.
Step 1: Create MySQL Database Tables
We will create database table expense_users to store user login information.
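The original post does not show the SQL for the expense_users table, so here is a minimal sketch. The column names (name, email, password, status) are assumptions based on a typical login table and on the conventions of the other tables below; adjust them to your needs.

```sql
-- Hypothetical sketch of the expense_users login table; column names are
-- assumptions, styled after the other tables in this tutorial.
CREATE TABLE `expense_users` (
  `id` int(11) NOT NULL,
  `name` varchar(250) NOT NULL,
  `email` varchar(250) NOT NULL,
  `password` varchar(250) NOT NULL,
  `status` enum('enable','disable') NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;

ALTER TABLE `expense_users`
  ADD PRIMARY KEY (`id`);

ALTER TABLE `expense_users`
  MODIFY `id` int(11) NOT NULL AUTO_INCREMENT;
```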
We will create database table expense_income_category to store income category details.
CREATE TABLE `expense_income_category` (
`id` int(11) NOT NULL,
`name` varchar(250) NOT NULL,
`status` enum('enable','disable') NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
ALTER TABLE `expense_income_category`
ADD PRIMARY KEY (`id`);
ALTER TABLE `expense_income_category`
MODIFY `id` int(11) NOT NULL AUTO_INCREMENT, AUTO_INCREMENT=4;
We will create database table expense_income to store income details.
CREATE TABLE `expense_income` (
`id` int(11) NOT NULL,
`amount` int(11) NOT NULL,
`date` date NOT NULL,
`category_id` int(11) NOT NULL,
`user_id` int(11) NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
ALTER TABLE `expense_income`
ADD PRIMARY KEY (`id`);
ALTER TABLE `expense_income`
MODIFY `id` int(11) NOT NULL AUTO_INCREMENT, AUTO_INCREMENT=5;
We will create database table expense_category to store expense category details.
CREATE TABLE `expense_category` (
`id` int(11) NOT NULL,
`name` varchar(250) NOT NULL,
`status` enum('enable','disable') NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
ALTER TABLE `expense_category`
ADD PRIMARY KEY (`id`);
ALTER TABLE `expense_category`
MODIFY `id` int(11) NOT NULL AUTO_INCREMENT, AUTO_INCREMENT=8;
And we will create database table expense_expense to store expense details.
CREATE TABLE `expense_expense` (
`id` int(11) NOT NULL,
`amount` int(11) NOT NULL,
`date` date NOT NULL,
`category_id` int(11) NOT NULL,
`user_id` int(11) NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
ALTER TABLE `expense_expense`
ADD PRIMARY KEY (`id`);
ALTER TABLE `expense_expense`
MODIFY `id` int(11) NOT NULL AUTO_INCREMENT, AUTO_INCREMENT=4;
Step 2: Manage Income
In the income.php file, we will create the HTML to manage income.
How do Delphi, WPF .NET Framework, and Electron compare to one another, and what is the best way to make an objective comparison? Embarcadero commissioned a whitepaper to investigate the differences between Delphi, WPF .NET Framework, and Electron for building Windows desktop applications. The benchmark application, a Windows 10 Calculator clone, was recreated in each framework by three volunteer Delphi Most Valuable Professionals (MVPs), a freelance WPF expert, and a freelance Electron developer. In this blog post, we will explore the IP security metric, which is part of the functionality comparison used in the whitepaper.
What is IP security in a deployable application?
How secure is the intellectual property of the source code in a deployable project? After companies invest resources in their projects, they face the challenge of putting their product into the hands of the public while protecting the code and techniques that generate revenue. This qualitative metric evaluates a user's ability to access source code via decompilation.
Protecting intellectual property is fundamentally important to long-term business plans. If a product solves a new problem or uses a novel technique, developers should understand how their choice of framework affects IP vulnerability. Delphi programs are compiled to native machine code rather than intermediate code; decompilation with free tools can recover the GUI form but yields only assembler code for the logic. IP security is weaker in WPF: decompiling executables and library files with free tools produces recognizable C# business logic and nearly recognizable XAML text. Finally, Electron has the biggest problem: by default, it ships the source code with every installation. Electron application code can be recovered with a simple text editor, a consequence of how the framework is structured, although it can be somewhat obfuscated with third-party tools. The available decompiler tools, and their results when applied to each framework's calculator application, are listed below.
The goal of this decompilation exercise was to determine the feasibility of recovering both the user interface and the original code from each framework's calculator application using open-source or free tools. The frameworks evaluated were Delphi VCL, Delphi FMX, WPF (C#), and Electron (with Angular).
Decompiling the Delphi VCL and FMX calculators successfully extracted all UI elements and represented the logic code as assembly. This exercise did not extract the function and procedure structure, although that may be possible.
Decompiling the WPF calculator yielded the UI elements and mostly recognizable C# code. WPF .NET Framework applications use the well-known Microsoft Intermediate Language (MSIL) format, which is easy to disassemble and decompile. Dependent assemblies can be extracted easily, and so can resources. With .NET reflection, information about a .NET assembly can be extracted; the entire contents, including classes, methods, code, and resources, can be pulled from an assembly. An advanced decompiler can reconstruct almost the exact structure of your code, including for/while loops, if statements, and try/catch blocks. Literal strings can be extracted easily. Finally, calls to methods and properties on external assemblies can be extracted.
The Electron calculator's UI elements and JavaScript code can easily be viewed with a standard text editor. The TypeScript code had been transpiled to JavaScript and could not be recovered. Overall, Electron's packaging provided a very limited degree of obfuscation.
Let's take a look at each framework.
Can Delphi applications be decompiled?
Delphi compiles to native machine code, eliminating much of the source code structure and metadata required for accurate decompilation and interpretation. Decompilation with a tool like DeDe provides full details of the user interface, but only assembler code for the logic/back end.
Decompilation tools
DeDe – one of the most popular Delphi decompilers.
Interactive Delphi Reconstructor – a decompiler for Delphi executables and dynamic libraries.
MiTeC DFM Editor – a standalone editor for Delphi form files (*.dfm) in binary and text formats.
(Image captions: DeDe decompilation of Delphi VCL; DFM Editor GUI code view of Delphi VCL; DFM Editor GUI design view of Delphi VCL; Delphi VCL assembly code generated by IDR)
Can WPF .NET Framework applications be decompiled?
WPF compiled into a Windows desktop application is converted into DLL and BAML files. Decompilation back to recognizable C# and near-perfect XAML is possible via third-party tools. Microsoft includes a Community Edition of Dotfuscator with Visual Studio, but its license is intended for personal use only. Professional solutions for .NET obfuscation range from hundreds to thousands of dollars, and additional steps are required to protect an application with an obfuscation tool.
Decompilation tools
WPF StylesExplorer – a WPF .baml decompiler and a tool for exploring .baml resources.
Snoop WPF – a tool for spying on/browsing the visual tree of a running WPF application, without the need for a debugger.
JetBrains dotPeek – a .NET decompiler and assembly browser.
(Image captions: dotPeek decompilation of the WPF logic; dotPeek decompilation of the WPF GUI; Snoop WPF decompilation of the WPF GUI)
Can Electron applications be decompiled?
Electron source code is packaged and deployed on the end user's system. Unless a developer uses third-party tools to obfuscate the code, the source code can be read verbatim with a simple text editor or by unpacking with a tool such as asar.
Decompilation tools
TextPad – a general-purpose text editor for plain text files.
asar – a simple tool for packing and unpacking uncompressed concatenated archives.
(Image captions: TextPad showing the Electron logic code; TextPad showing the Electron UI code)
Overall, Delphi offers the most secure long-term outlook, the best intellectual property security, and the easiest in-house customization, at the cost of a one-time commercial license purchase. WPF applications can easily be decompiled in their default form and require additional steps and tools to obfuscate the code. Electron can likewise easily be decompiled by default and requires additional steps and tools to obfuscate the code; its uncertain long-term outlook and reliance on corporate sponsorship and community support for further development are drawbacks.
Are you ready to explore all of the metric data in the whitepaper, "Discovering the Best Developer Framework Through Benchmarking"?
Fifty years of the Pascal language, and Delphi is its heir, empowering Pascal developers in today's complex scenarios, even though it goes unmentioned by the inventor of the Pascal language.
Niklaus Wirth published the paper "The Programming Language Pascal" in March 1971, which means that this month marks exactly 50 years since the Pascal programming language was officially introduced.
The celebrated computer scientist marked the anniversary by writing a very interesting Viewpoint article for Communications of the ACM (March 2021, Vol. 64 No. 3, Pages 39-41), titled 50 Years of Pascal.
The article is well worth reading in full, and I suggest you read it before continuing with this blog post. I'll wait here… Done? OK, good, here are my comments.
The first historic advertisement for Borland Turbo Pascal
Rooted in type safety
I want to start by mentioning that there is no reason to shy away from the Pascal heritage that lives on in Delphi. Pascal has been one of the most successful programming languages of all time, bringing to the table concepts such as type safety and a focus on code readability and maintainability that are core principles of every programming language today.
As Wirth writes about Pascal's key ideas, "a significant extension were the data types and structures… most essential was the pervasive concept of data type… This contributed to the detection of errors, and this before the program's execution." In a world of dynamic languages, this remains a key idea and a differentiator (and a reason safer languages such as TypeScript exist).
Borland made the splash
While Pascal quickly gained acceptance at universities, it took a few more years (starting in 1983) for it to go mainstream. As Wirth writes:
"Philippe Kahn of Borland Inc. in Santa Cruz surrounded our compiler with a simple operating system, a text editor, and routines for error discovery and diagnostics. They sold this package for $50 on floppy disks (Turbo Pascal). Thereby Pascal spread immediately, particularly in schools, and became for many the entry point into programming and computing science."
Having a very fast compiler was a key tenet of Turbo Pascal (and this still holds for Delphi today), along with an affordable price. And at the time DOS became mainstream, Turbo Pascal was far more powerful than the bundled Visual Basic.
An early Turbo Pascal manual
Academic successors… ignoring the industry
In the last part of the article, Wirth covers in depth all the languages that followed the original Pascal, beginning with Modula-2 (which shares with Turbo Pascal the notion of compilation modules, or units, as we still call them today).
From this point on, the article focuses on Oberon, a very nice object-oriented extension of the Pascal data type system, but one that had only very limited success compared with Apple's Object Pascal and (most notably) Delphi.
Wirth writes: "Oberon is still in successful use in many places today. A breakthrough like Pascal's, however, did not occur." While Oberon was indeed no breakthrough, he fails to consider that a different object-oriented extension of Pascal, Delphi, enjoyed enormous popularity in the late '90s, comparable to that of Turbo Pascal in its early days. So although he is formally correct that academic versions of Pascal such as Oberon had limited success, nothing compares with the success of the many dialects of Object Pascal in industry, including but not limited to Delphi.
Today, Delphi is still extremely successful compared with Oberon and every other Pascal-derived language, and by most accounts it remains one of the 20 most widely used programming languages. I am not sure whether Wirth deliberately chose to ignore Delphi in his history of Pascal. It is clear he decided to focus only on his academic path, his journey toward the perfect Pascal language ("The sequence Pascal-Modula-Oberon witnesses my attempts to achieve this."). Yet one of the reasons he should be proud of Pascal is that Pascal-derived languages are actively used in industry today. Ignoring Delphi strikes me as a glaring omission.
Pascal is still widely used in the IT world today because of Delphi, and its impact on the industry at large remains powerful. When Wirth claims that "many of these languages, such as Java (Sun Microsystems) and C# (Microsoft), were strongly influenced by Oberon or Pascal," he overlooks the fact that it was Delphi, more than the original Pascal or Oberon, that influenced C# through the ideas of Anders Hejlsberg, and Java as well, through Borland's collaboration with Sun on the concept of properties.
The Delphi IDE today
Pascal lives on in Delphi
Once again, it is great to celebrate 50 years of Pascal, a remarkable language that deeply influenced our industry. But it is even nicer to celebrate it alongside Delphi's 26th anniversary and after our 10.4.2 release, which delivers unparalleled support for Windows 10 client development (among the best in the industry), an even faster compiler able to churn through millions of lines of Pascal-based code in minutes, and the unique ability to target many operating systems (Windows, Linux, macOS, Android, iOS) from the same source code, including the user interface.
Delphi is still rocking the world, so a heartfelt thank you goes to Wirth, Hejlsberg, and Kahn, but also to the developers and managers who have kept Delphi alive and kicking over the years, and to the great team working on it today.
The InterBase VAR program is here to help you take your ideas from paper to market. We know one size doesn’t fit all and each solution is unique; that is why the VAR Program exists. VARs can embed InterBase with their applications with a “silent install” and pay for licenses periodically as they are distributed. This licensing option and volume license discounts are possible by setting up a VAR agreement.
The ThingConnect IoT device component pack is one of the best parts of RAD Studio. It lets you connect to dozens of IoT devices through easy-to-use interfaces. For instance, you can connect to the Aeon Labs Light Bulb using Delphi or C++Builder.
As you can see, these light bulbs can be controlled by other devices, and you can schedule events using your FireMonkey-based mobile app.
procedure TAeotecLEDBulbApp.ReadBtn1Click(Sender: TObject);
var
  Value: String;
begin
  Memo1.Text := '';
  if ComboBox1.ListItems[0].IsSelected then
  begin
    Value := FVeraAeotecLEDBulb.DeviceName;
    Memo1.Text := Value;
  end
  else if ComboBox1.ListItems[1].IsSelected then
  begin
    Value := FVeraAeotecLEDBulb.ManufacturerName;
    Memo1.Text := Value;
  end
  else if ComboBox1.ListItems[2].IsSelected then
  begin
    Value := FVeraAeotecLEDBulb.ModelName;
    Memo1.Text := Value;
    if Memo1.Text = '' then
      Memo1.Text := 'Not exist';
  end
  else if ComboBox1.ListItems[3].IsSelected then
  begin
    Value := IntToStr(FVeraAeotecLEDBulb.DeviceID);
    Memo1.Text := Value;
  end
  else if ComboBox1.ListItems[4].IsSelected then
  begin
    Value := IntToStr(FVeraAeotecLEDBulb.PnPID);
    Memo1.Text := Value;
  end;
end;
Arrays are one of the most used structures in JavaScript programming, which is why it's important to know their built-in methods like the back of your hand.
In this tutorial, we'll take a look at how to split an array into chunks of n size in JavaScript.
Specifically, we'll take a look at two approaches:
Using the slice() method and a for loop
Using the splice() method and a while loop
Splitting the Array Into Even Chunks Using slice() Method
The easiest way to extract a chunk of an array, or rather, to slice it up, is the slice() method:
slice(start, end) - Returns a part of the invoked array, between the start and end indices.
Note: Both start and end can be negative integers, which just denotes that they are counted from the end of the array: -1 is the last element, -2 the next to last, and so on.
The array returned by slice() is a shallow copy, meaning that any references within the original array are copied over as-is; no memory is allocated for completely new objects.
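To make the negative-index note and the shallow-copy behavior concrete, here is a small illustration (not from the original tutorial):

```javascript
const letters = ['a', 'b', 'c', 'd', 'e'];

// Negative indices count from the end of the array
console.log(letters.slice(-2));    // [ 'd', 'e' ]
console.log(letters.slice(1, -1)); // [ 'b', 'c', 'd' ]

// The copy is shallow: nested objects are shared, not cloned
const nested = [{ n: 1 }];
const copy = nested.slice();
copy[0].n = 2;
console.log(nested[0].n); // 2
```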
So, to slice a list or array into even chunks, let's use the slice() method:
function sliceIntoChunks(arr, chunkSize) {
const res = [];
for (let i = 0; i < arr.length; i += chunkSize) {
const chunk = arr.slice(i, i + chunkSize);
res.push(chunk);
}
return res;
}
const arr = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10];
console.log(sliceIntoChunks(arr, 3));
Running the code above yields the following output:
[ [ 1, 2, 3 ], [ 4, 5, 6 ], [ 7, 8, 9 ], [ 10 ] ]
In the code above, we break down arr into smaller chunks of size 3, by iterating through the array and slicing it every chunkSize. In the last iteration, there'll be only one element (10) left, which will have to make up its own chunk.
Splitting the Array Into Even Chunks Using splice() Method
Even though the splice() method may seem similar to the slice() method, its use and side-effects are very different. Let's take a closer look:
// Splice does the following two things:
// 1. Removes deleteCount elements starting from startIdx (in-place)
// 2. Inserts the provided new elements (newElem1, newElem2...) into myArray starting with index startIdx (also in-place)
// The return value of this method is an array that contains all the deleted elements
myArray.splice(startIdx, deleteCount, newElem1, newElem2...)
let arrTest = [2, 3, 1, 4]
let chunk = arrTest.splice(0,2)
console.log(chunk) // [2, 3]
console.log(arrTest) // [1, 4]
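Building on that behavior, the chunking itself can be sketched as follows (the function name spliceIntoChunks is our own): splice chunkSize elements off the front of the array until it is empty.

```javascript
function spliceIntoChunks(arr, chunkSize) {
  const res = [];
  // splice() removes elements in-place, so the array shrinks each pass
  while (arr.length > 0) {
    const chunk = arr.splice(0, chunkSize);
    res.push(chunk);
  }
  return res;
}

const arr = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10];
console.log(spliceIntoChunks(arr, 3));
// [ [ 1, 2, 3 ], [ 4, 5, 6 ], [ 7, 8, 9 ], [ 10 ] ]
```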
Here we are using a while loop to traverse the array. In each iteration we perform the splicing operation and push each chunk into a resulting array until there are no more elements left in the original array (arr.length > 0).
A very important thing to note is that splice() changes the original array, whereas slice() creates a copy, so the original array is left unchanged.
Conclusion
In this article, we went over a couple of easy ways to split an array into even chunks in JavaScript. Along the way, we learned how to work with a couple of built-in array methods, slice() and splice().
It’s 50 years of the Pascal language and Delphi is its heir, empowering Pascal developers in today’s complex scenarios, despite being ignored by the Pascal language inventor
Niklaus Wirth published the paper “The programming language Pascal” in March 1971, which means it is exactly 50 years this month since the Pascal programming language was officially launched.
The renowned computer scientist celebrated the anniversary by writing a very interesting Viewpoint article for Communications of the ACM (March 2021, Vol. 64, No. 3, Pages 39-41), titled 50 Years of Pascal.
The article is well worth reading, and I suggest you go over it before continuing with this blog post. I’ll wait here… Done? OK, good, here are my comments.
The first historic ad for Borland Turbo Pascal
Rooted in Type Safety
I want to start by mentioning that there is no reason to shy away from the Pascal heritage that lives on in Delphi. Pascal has been one of the most successful programming languages ever, and it brought to the table concepts like type safety and a focus on code readability and maintainability that are core tenets of any programming language today.
As Wirth writes of the key ideas of Pascal, “a significant extension were data types and structures… most essential was the pervasive concept of data type… This contributed to the detection of errors, and this before the program’s execution”. In a world of dynamic languages, this remains a key idea and differentiator (and a reason for safer languages like TypeScript to exist).
Borland Made the Splash
While Pascal quickly gained acceptance in universities, it took a few more years (starting in 1983) for it to become mainstream. As Wirth writes:
“Philippe Kahn at Borland Inc. in Santa Cruz surrounded our compiler with a simple operating system, a text editor, and routines for error discovery and diagnostics. They sold this package for $50 on floppy disks (Turbo Pascal). Thereby Pascal spread immediately, particularly in schools, and it became the entry point for many to programming and computer science.”
Having a very fast compiler was a key tenet of Turbo Pascal (and this is still true for Delphi today), along with an affordable price. And at the time DOS became mainstream, Turbo Pascal was so much more powerful than the built-in Visual Basic.
An early Turbo Pascal manual
Academic Successors… Ignoring the Industry
In the last part of the article, Wirth covers at length the languages that followed the original Pascal, starting with Modula-2 (which shares with Turbo Pascal the notion of compilation modules, or units, as we still call them today).
From this point, the article focuses on Oberon, a very nice object-oriented extension of the Pascal data type system, but one that had very limited success compared to Apple’s Object Pascal and (more notably) Delphi.
Wirth writes: “Even today Oberon is successfully in use in many places. A breakthrough like Pascal’s, however, did not occur.” While it is true that Oberon wasn’t a breakthrough, he fails to consider that a different object-oriented extension of Pascal, Delphi, enjoyed huge popularity in the late ’90s, comparable to that of Turbo Pascal in the early days. So while he’s formally correct that academic versions of Pascal like Oberon had limited success, nothing compares to the success of the many Object Pascal dialects in the industry, including but not limited to Delphi.
Today Delphi is still extremely successful compared to Oberon and any other Pascal derived language and remains one of the 20 most used programming languages, according to most sources. I’m not sure if Wirth deliberately chose to ignore Delphi in his history of Pascal. It is clear he decided to focus only on his academic route, his journey to achieve the perfect Pascal language (“The sequence Pascal–Modula–Oberon is witness to my attempts to achieve it.”). However one of the reasons he should be proud of Pascal is the fact that Pascal-derived languages are actively used in the industry today. Ignoring Delphi seems like a glaring omission to me.
Pascal is still widely used in the IT world today thanks to Delphi, and its impact on the industry at large remains powerful. When Wirth claims that “many of those languages, like Java (Sun Microsystems) and C# (Microsoft) have been strongly influenced by Oberon or Pascal,” he misses the fact that it was Delphi, more than the original Pascal or Oberon, that influenced C# via the ideas of Anders Hejlsberg, and also Java via the collaboration of Borland and Sun on the concept of properties.
The Delphi IDE today
Pascal Is Alive in Delphi
Again, it is great to celebrate 50 years of Pascal, a remarkable language that deeply influenced our industry. But it is even nicer to celebrate it along with Delphi’s 26th anniversary and after our 10.4.2 release, which brings unparalleled support for Windows 10 client development (one of the best in the industry), an even faster compiler capable of going over millions of lines of Pascal-based code in minutes, and the unique ability to target many operating systems (Windows, Linux, macOS, Android, iOS) with the same source code, including the user interface.
Delphi is still rocking the world, so we owe a big thank you to Wirth, Hejlsberg, and Kahn — but also to the developers and managers who kept Delphi alive and kicking over the years and the great team working on it today.
#include <iostream>
#include <conio.h>
#include <tchar.h>
#include <SQLiteCpp/SQLiteCpp.h>

int _tmain(int argc, _TCHAR* argv[])
{
    // Open a database file in create/write mode
    SQLite::Database db("test.db3", SQLite::OPEN_READWRITE | SQLite::OPEN_CREATE);
    std::cout << "SQLite database file " << db.getFilename().c_str() << "\n";

    // Create a new table with an explicit "id" column aliasing the underlying rowid
    db.exec("DROP TABLE IF EXISTS test");
    db.exec("CREATE TABLE test (id INTEGER PRIMARY KEY, value TEXT)");

    // first row
    db.exec("INSERT INTO test VALUES (NULL, 'test')");
    // second row
    db.exec("INSERT INTO test VALUES (NULL, 'second')");
    // update the second row
    db.exec("UPDATE test SET value='second-updated' WHERE id='2'");

    // Check the results: expect two rows of results
    SQLite::Statement query(db, "SELECT * FROM test");
    std::cout << "SELECT * FROM test:\n";
    while (query.executeStep())
    {
        std::cout << "row ("
                  << query.getColumn(0) << ","
                  << query.getColumn(1) << ")\n";
    }

    getch();
    return 0;
}
How do Delphi, WPF .NET Framework, and Electron perform compared to each other, and what’s the best way to make an objective comparison? Embarcadero commissioned a whitepaper to investigate the differences between Delphi, WPF .NET Framework, and Electron for building Windows desktop applications. The benchmark application – a Windows 10 Calculator clone – was recreated in each framework by three volunteer Delphi Most Valuable Professionals (MVPs), one expert freelance WPF developer, and one expert freelance Electron developer. In this blog post, we are going to explore the IP Security metric, which is part of the Functionality comparison used in the whitepaper.
What is IP Security in a deployable application?
How secure is the intellectual property of the source code in a deployable project? After businesses invest resources into their projects, they face the challenge of putting their product into the hands of the public while protecting the code and techniques that produce revenue. This qualitative metric evaluates the ability of a user to access source code via decompilation.
Intellectual property protection is fundamentally important to long-term business plans. If a product solves a new problem or utilizes a novel technique, the developers should understand how their choice of framework affects IP vulnerability. Delphi programs compile into platform-native machine code rather than intermediate code. Decompilation using free tools can recover the GUI form but only yields assembly code for the logic. IP security is more tenuous in WPF: decompiling executable and library files with free tools results in recognizable C# business logic and nearly recognizable XAML text. Finally, Electron has the most significant problem – by default, it gives away source code with each installation. Electron application code can be recovered with a simple text editor – a function of how the framework is structured – but can be somewhat obfuscated using third-party tools. Available decompiler tools and their results when applied to each framework’s calculator application are listed below.
The goal of this decompilation exercise was to determine the feasibility of retrieving both the UI and the original code from each framework’s calculator application using open-source or free tools. The frameworks assessed were Delphi VCL, Delphi FMX, WPF (C#), and Electron (with Angular).
When the Delphi VCL and FMX calculators were decompiled, all UI elements were successfully extracted and the logic code was presented as assembly. This exercise did not extract function and procedure structure, but it may be possible.
Decompiling the WPF calculator yielded the UI elements and mostly recognizable C# code. WPF .NET Framework applications use the known MSIL (Microsoft Intermediate Language) format, which is easy to disassemble and decompile. Dependent assemblies and resources can easily be extracted, and .NET reflection can be used to pull the entire contents of an assembly, including its classes, methods, code, and resources. An advanced decompiler can reconstruct almost the exact structure of your code, including for/while loops, if statements, and try/catch blocks. Literal strings, along with calls to methods and properties in external assemblies, can also be extracted.
The UI elements and JavaScript code of the Electron calculator are easily exposed using a standard text editor. The Typescript code was transpiled into JavaScript and could not be recovered. Overall, Electron’s packaging provided a very limited level of obfuscation.
Let’s take a look at each framework.
Can Delphi applications be decompiled?
Delphi compiles to native machine code, eliminating much of the source code structure and metadata necessary for accurate decompilation and interpretation. Decompilation using a tool like DeDe will provide full details about the UI but only assembly code for the logic/back-end.
Decompilation Tools
DeDe – one of the most popular Delphi decompilers.
Interactive Delphi Reconstructor – a decompiler for Delphi executables and dynamic libraries.
MiTeC DFM Editor – a standalone editor for Delphi Form files (*.dfm) in both binary and text format.
Screenshots: DeDe Decompilation of Delphi VCL; DFM Editor GUI Code View of Delphi VCL; DFM Editor GUI Design View of Delphi VCL; Delphi VCL Assembly Code Generated by IDR
Can WPF .NET Framework applications be decompiled?
WPF compiled to a Windows desktop application is converted to .dll and .baml files. Decompilation back to recognizable C# and near-perfect XAML is possible through 3rd party tools. Microsoft includes a community edition of Dotfuscator with Visual Studio but its license is for personal use only. Professional solutions for .NET obfuscation range from hundreds to thousands of dollars. There are also extra steps involved to protect an application with an obfuscation tool.
Decompilation Tools
WPF StylesExplorer – a WPF .baml decompiler and tool to explore .baml resources.
Snoop WPF – a tool to spy/browse the visual tree of a running WPF application without the need for a debugger.
JetBrains dotPeek – a .NET decompiler and assembly browser.
Screenshots: dotPeek Decompilation of WPF Logic; dotPeek Decompilation of WPF GUI; Snoop WPF Decompilation of WPF GUI
Can Electron applications be decompiled?
Electron source code is packaged and deployed to the end-user’s system. Unless a developer uses third-party tools to obfuscate the code, the source code can be read verbatim using a simple text editor or by unpacking with a tool like asar.
Decompilation Tools
TextPad – a general purpose text editor for plaintext files.
asar – a simple file uncompressed concatenation archive format packing and unpacking tool.
Screenshots: TextPad Displaying Electron Logic Code; TextPad Displaying Electron UI Code
Overall, Delphi provides the most assured long-term outlook, the best intellectual property security, and the easiest in-house customization, at the cost of a one-time commercial license purchase. WPF can be decompiled with ease in its default setup and requires extra steps and tools to obfuscate its code. Electron can likewise be decompiled with ease in its default setup and requires extra steps and tools to obfuscate the code; its uncertain long-term outlook and its reliance on corporate sponsorships and community support for additional development also count against it.
Ready to explore all the metrics in the “Discovering The Best Developer Framework Through Benchmarking” whitepaper?
Object Relational Mapping is the idea of writing queries using the object-oriented paradigm in your preferred programming language. In other words, we use our language to talk with the database instead of writing SQL.
Why utilize ORM?
It abstracts away the database system, so switching is easy.
Your queries can be more efficient than ones written by hand in SQL.
With an ORM, you get lots of features out of the box, for instance:
Transactions
Migrations
Seeds
Streams
Connection Pooling
The Delphi community has several ORM libraries, and DORM (Delphi ORM) by Daniele Teti is one of the popular open-source libraries you can use.
DORM has many features available:
Database agnostic (does not require database changes!)
Support for has-one, has-many, and belongs-to relations
Mapping through file, attributes, or CoC
Save and retrieve objects graph, not only single objects
External (file) or internal (resource, json stream) configuration
Interfaces based!
FirebirdSQL (using UIB)
Interbase (using UIB)
SQLServer (using FireDAC driver)
SQLite3 (using an SQLite3 wrapper)
and more!
The following demo shows DORM's optimistic concurrency control: two sessions load the same record, and the second attempt to persist a stale copy raises EDORMLockingException.
procedure ObjVersionConcurrentTransactionsDEMO;
var
  dormSession, dormSession1, dormSession2: TSession;
  Customer, C1, C2: TCustomerV;
  id: Integer;
begin
  dormSession := TSession.CreateConfigured(
    TStreamReader.Create(CONFIG_FILE), TdormEnvironment.deDevelopment);
  try
    Customer := TCustomerV.Create;
    Customer.Name := 'Daniele Teti Inc.';
    Customer.Address := 'Via Roma, 16';
    Customer.EMail := 'daniele@danieleteti.it';
    Customer.CreatedAt := date;
    dormSession.Persist(Customer);
    id := Customer.id;
    WriteLn('Version: ', Customer.ObjVersion);
    dormSession.Commit(true);
  finally
    dormSession.Free;
  end;

  // read the same object twice
  dormSession1 := TSession.CreateConfigured(
    TStreamReader.Create(CONFIG_FILE), TdormEnvironment.deDevelopment);
  try
    dormSession2 := TSession.CreateConfigured(
      TStreamReader.Create(CONFIG_FILE), TdormEnvironment.deDevelopment);
    try
      // Two users get the same record
      WriteLn('User1 loads object ' + inttostr(id) + ' and closes the transaction');
      C1 := dormSession1.Load<TCustomerV>(id);
      dormSession1.Commit;
      WriteLn('User2 loads object ' + inttostr(id) + ' and closes the transaction');
      C2 := dormSession2.Load<TCustomerV>(id);
      dormSession2.Commit;
      // The first user updates the object and saves it
      C1.Name := 'John Doe';
      C1.ObjStatus := osDirty;
      WriteLn('User1 updates object ' + inttostr(id));
      dormSession1.Persist(C1);
      dormSession1.Commit;
      // The second user tries to do the same with a stale copy
      C2.Name := 'Jane Doe';
      C2.ObjStatus := osDirty;
      WriteLn('User2 tries to update object ' + inttostr(id) + ' (an exception will be raised)');
      dormSession2.Persist(C2); // raises EDORMLockingException
    finally
      dormSession2.Free;
    end;
  finally
    dormSession1.Free; // was dormSession.Free, which would free an already-freed session
  end;
end;
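The exception at the end is optimistic locking at work: each record carries a version (ObjVersion), and a persist only succeeds if the version that was read is still current. The pattern can be sketched generically (a simplified Python illustration, not DORM's actual implementation):

```python
# Simplified optimistic-locking sketch: the "database" is a dict, and a
# version column detects conflicting writes, like DORM's ObjVersion.
class LockingError(Exception):
    """Stands in for DORM's EDORMLockingException."""

record = {"id": 1, "name": "Daniele Teti Inc.", "obj_version": 1}

def load(rec):
    return dict(rec)  # each session works on its own snapshot

def persist(rec, snapshot):
    if snapshot["obj_version"] != rec["obj_version"]:
        raise LockingError("record was changed by another session")
    rec.update(snapshot)
    rec["obj_version"] += 1  # a successful write bumps the version

c1 = load(record)    # user 1 loads the record
c2 = load(record)    # user 2 loads the same record
c1["name"] = "John Doe"
persist(record, c1)  # first writer wins; version goes from 1 to 2
c2["name"] = "Jane Doe"
try:
    persist(record, c2)  # stale version -> conflict
    conflict = False
except LockingError:
    conflict = True
```

The first write succeeds and increments the version; the second write still holds the old version, so it is rejected rather than silently overwriting the first user's change.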
Function optimization is a field of study that seeks an input to a function that results in the maximum or minimum output of the function.
There are a large number of optimization algorithms and it is important to study and develop intuitions for optimization algorithms on simple and easy-to-visualize test functions.
Two-dimensional functions take two input values (x and y) and output a single evaluation of the input. They are among the simplest types of test functions to use when studying function optimization. The benefit of two-dimensional functions is that they can be visualized as a contour plot or surface plot that shows the topography of the problem domain with the optima and samples of the domain marked with points.
In this tutorial, you will discover standard two-dimensional functions you can use when studying function optimization.
Let’s get started.
Two-Dimensional (2D) Test Functions for Function Optimization Photo by DomWphoto, some rights reserved.
Tutorial Overview
A two-dimensional function is a function that takes two input variables and computes the objective value.
We can think of the two input variables as two axes on a graph, x and y. Each input to the function is a single point on the graph and the outcome of the function can be taken as the height on the graph.
This allows the function to be conceptualized as a surface, and we can characterize the function based on the structure of that surface. For example, hills form at input points that produce large relative outcomes of the objective function, and valleys form at input points that produce small relative outcomes.
A surface may have one major feature, or global optimum, or it may have many, with lots of places for an optimization algorithm to get stuck. The surface may be smooth, noisy, convex, or have any manner of other properties that we may care about when testing optimization algorithms.
There are many different types of simple two-dimensional test functions we could use.
Nevertheless, there are standard test functions that are commonly used in the field of function optimization. There are also specific properties of test functions that we may wish to select when testing different algorithms.
We will explore a small number of simple two-dimensional test functions in this tutorial, organized by their properties into two different groups:
Unimodal Functions
Unimodal Function 1
Unimodal Function 2
Unimodal Function 3
Multimodal Functions
Multimodal Function 1
Multimodal Function 2
Multimodal Function 3
Each function will be presented using Python code, with an implementation of the target objective function and a sampling of the function shown as a surface plot.
All functions are presented as minimization functions, i.e. the goal is to find the input that results in the minimum (smallest) output of the function. Any maximizing function can be turned into a minimization function by negating its output, and any minimizing function can be made maximizing in the same way.
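The sign-flip trick can be demonstrated directly (a small added illustration, not part of the original code listings): maximizing a function and minimizing its negation agree on the same optimal input.

```python
# Maximizing f is the same problem as minimizing -f: both searches agree
# on the same optimal input; only the sign of the objective value changes.
def f(x, y):                 # to be MAXIMIZED: peak of 5.0 at (0, 0)
    return 5.0 - x**2 - y**2

def neg_f(x, y):             # to be MINIMIZED: same optimum, sign flipped
    return -f(x, y)

grid = [(x / 10.0, y / 10.0) for x in range(-20, 21) for y in range(-20, 21)]
best_max = max(grid, key=lambda p: f(*p))
best_min = min(grid, key=lambda p: neg_f(*p))
print(best_max, best_min)  # both report the same point
```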
I did not invent these functions; they are taken from the literature. See the further reading section for references.
You can then copy and paste the code for one or more functions into your own project to study or compare the behavior of optimization algorithms.
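As a sketch of that workflow (my own illustration, not from the tutorial), any of these objective functions can be dropped into a simple optimizer. Here a pure-Python random search minimizes the bowl-shaped function x**2 + y**2:

```python
# Minimal random-search optimizer: sample points uniformly in the bounds
# and keep the best one. Any objective(x, y) from this tutorial plugs in.
import random

def objective(x, y):
    return x**2.0 + y**2.0

def random_search(objective, bounds, n_iter=10000, seed=1):
    random.seed(seed)
    best, best_eval = None, float("inf")
    for _ in range(n_iter):
        x = random.uniform(*bounds)
        y = random.uniform(*bounds)
        value = objective(x, y)
        if value < best_eval:
            best, best_eval = (x, y), value
    return best, best_eval

best, score = random_search(objective, (-5.0, 5.0))
print(best, score)  # a point near the optimum at (0, 0)
```

Random search is a weak baseline, but it shows the contract every algorithm shares: evaluate the objective at candidate inputs and keep the best.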
Unimodal Functions
Unimodal means that the function has a single global optimum.
A unimodal function may or may not be convex. A convex function is one where a line segment drawn between any two points on its surface never passes below the surface. For a two-dimensional function shown as a contour or surface plot, this means the function has a bowl shape and the line between any two points remains on or above the bowl.
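The convexity definition can be spot-checked numerically (an added illustration, not from the original tutorial): for the bowl function x**2 + y**2, the chord between any two points never dips below the surface.

```python
# Numerical spot-check of convexity for f(x, y) = x**2 + y**2:
# f(t*a + (1-t)*b) <= t*f(a) + (1-t)*f(b) for random point pairs a, b.
import random

def f(x, y):
    return x**2.0 + y**2.0

random.seed(3)
for _ in range(1000):
    x1, y1 = random.uniform(-5, 5), random.uniform(-5, 5)  # point a
    x2, y2 = random.uniform(-5, 5), random.uniform(-5, 5)  # point b
    t = random.random()                                    # mix factor
    chord = t * f(x1, y1) + (1 - t) * f(x2, y2)
    surface = f(t * x1 + (1 - t) * x2, t * y1 + (1 - t) * y2)
    assert surface <= chord + 1e-12  # convexity inequality holds
print("convexity check passed")
```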
Let’s look at a few examples of unimodal functions.
Unimodal Function 1
The function is f(x, y) = x^2 + y^2, often referred to as the sphere function. The range is bounded to -5.0 and 5.0, with one global optimum at [0.0, 0.0].
# unimodal test function
from numpy import arange
from numpy import meshgrid
from matplotlib import pyplot
from mpl_toolkits.mplot3d import Axes3D
# objective function
def objective(x, y):
    return x**2.0 + y**2.0
# define range for input
r_min, r_max = -5.0, 5.0
# sample input range uniformly at 0.1 increments
xaxis = arange(r_min, r_max, 0.1)
yaxis = arange(r_min, r_max, 0.1)
# create a mesh from the axis
x, y = meshgrid(xaxis, yaxis)
# compute targets
results = objective(x, y)
# create a surface plot with the jet color scheme
figure = pyplot.figure()
axis = figure.add_subplot(projection='3d')
axis.plot_surface(x, y, results, cmap='jet')
# show the plot
pyplot.show()
Running the example creates a surface plot of the function.
Surface Plot of Unimodal Optimization Function 1
Unimodal Function 2
The function is f(x, y) = 0.26 * (x^2 + y^2) - 0.48 * x * y, known as the Matyas function. The range is bounded to -10.0 and 10.0, with one global optimum at [0.0, 0.0].
# unimodal test function
from numpy import arange
from numpy import meshgrid
from matplotlib import pyplot
from mpl_toolkits.mplot3d import Axes3D
# objective function
def objective(x, y):
    return 0.26 * (x**2 + y**2) - 0.48 * x * y
# define range for input
r_min, r_max = -10.0, 10.0
# sample input range uniformly at 0.1 increments
xaxis = arange(r_min, r_max, 0.1)
yaxis = arange(r_min, r_max, 0.1)
# create a mesh from the axis
x, y = meshgrid(xaxis, yaxis)
# compute targets
results = objective(x, y)
# create a surface plot with the jet color scheme
figure = pyplot.figure()
axis = figure.add_subplot(projection='3d')
axis.plot_surface(x, y, results, cmap='jet')
# show the plot
pyplot.show()
Running the example creates a surface plot of the function.
Surface Plot of Unimodal Optimization Function 2
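A quick sanity check (my addition, not from the tutorial): this function can be rewritten as 0.02 * (x^2 + y^2) + 0.24 * (x - y)^2, which makes it obvious that the only optimum is at the origin.

```python
# Verify the algebraic identity and the single optimum of the function
# 0.26*(x**2 + y**2) - 0.48*x*y at the origin.
def objective(x, y):
    return 0.26 * (x**2 + y**2) - 0.48 * x * y

def rewritten(x, y):
    # identical after expanding: 0.26x^2 + 0.26y^2 - 0.48xy
    return 0.02 * (x**2 + y**2) + 0.24 * (x - y)**2

pts = [(i / 10.0, j / 10.0) for i in range(-100, 101) for j in range(-100, 101)]
assert all(abs(objective(x, y) - rewritten(x, y)) < 1e-9 for x, y in pts)
assert objective(0.0, 0.0) == 0.0                    # the global minimum
assert all(objective(x, y) > 0.0 for x, y in pts if (x, y) != (0.0, 0.0))
```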
Unimodal Function 3
The range is bounded to -10.0 and 10.0. This function is known as Easom's function; its single global optimum is at [pi, pi] (approximately [3.14159, 3.14159]), where the function evaluates to -1.0.
# unimodal test function
from numpy import cos
from numpy import exp
from numpy import pi
from numpy import arange
from numpy import meshgrid
from matplotlib import pyplot
from mpl_toolkits.mplot3d import Axes3D
# objective function
def objective(x, y):
    return -cos(x) * cos(y) * exp(-((x - pi)**2 + (y - pi)**2))
# define range for input
r_min, r_max = -10, 10
# sample input range uniformly at 0.01 increments
xaxis = arange(r_min, r_max, 0.01)
yaxis = arange(r_min, r_max, 0.01)
# create a mesh from the axis
x, y = meshgrid(xaxis, yaxis)
# compute targets
results = objective(x, y)
# create a surface plot with the jet color scheme
figure = pyplot.figure()
axis = figure.add_subplot(projection='3d')
axis.plot_surface(x, y, results, cmap='jet')
# show the plot
pyplot.show()
Running the example creates a surface plot of the function.
Surface Plot of Unimodal Optimization Function 3
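A quick verification of the optimum (added here, not from the original listing): at (pi, pi) Easom's function evaluates to exactly -1, while almost everywhere else it is nearly zero, which is what makes the surface so deceptive for optimizers.

```python
# Verify Easom's function: global minimum of -1.0 at (pi, pi), and an
# almost perfectly flat plateau near 0 away from the optimum.
from math import cos, exp, pi

def objective(x, y):
    return -cos(x) * cos(y) * exp(-((x - pi)**2 + (y - pi)**2))

assert abs(objective(pi, pi) - (-1.0)) < 1e-12   # the global minimum
assert abs(objective(0.0, 0.0)) < 1e-8           # flat plateau elsewhere
```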
Multimodal Functions
A multimodal function is a function with more than one "mode," or optimum (e.g. valley).
Multimodal functions are non-convex.
There may be one global optimum and one or more local or deceptive optima. Alternatively, there may be multiple global optima, i.e. multiple different inputs that result in the same minimal output of the function.
Let’s look at a few examples of multimodal functions.
Multimodal Function 1
The range is bounded to -5.0 and 5.0, with one global optimum at [0.0, 0.0], where the function evaluates to 0.0. This function is known as Ackley's function.
# multimodal test function
from numpy import arange
from numpy import exp
from numpy import sqrt
from numpy import cos
from numpy import e
from numpy import pi
from numpy import meshgrid
from matplotlib import pyplot
from mpl_toolkits.mplot3d import Axes3D
# objective function
def objective(x, y):
    return -20.0 * exp(-0.2 * sqrt(0.5 * (x**2 + y**2))) - exp(0.5 * (cos(2 * pi * x) + cos(2 * pi * y))) + e + 20
# define range for input
r_min, r_max = -5.0, 5.0
# sample input range uniformly at 0.1 increments
xaxis = arange(r_min, r_max, 0.1)
yaxis = arange(r_min, r_max, 0.1)
# create a mesh from the axis
x, y = meshgrid(xaxis, yaxis)
# compute targets
results = objective(x, y)
# create a surface plot with the jet color scheme
figure = pyplot.figure()
axis = figure.add_subplot(projection='3d')
axis.plot_surface(x, y, results, cmap='jet')
# show the plot
pyplot.show()
Running the example creates a surface plot of the function.
Surface Plot of Multimodal Optimization Function 1
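A small verification (my addition): Ackley's function is exactly zero at the origin, while the surrounding ripples, such as the point (1, 1), sit well above it. Those ripples are the local optima that trap hill-climbing algorithms.

```python
# Verify Ackley's function: global minimum of 0.0 at the origin, with
# higher "ripples" nearby that act as local optima.
from math import e, pi, sqrt, cos, exp

def objective(x, y):
    return (-20.0 * exp(-0.2 * sqrt(0.5 * (x**2 + y**2)))
            - exp(0.5 * (cos(2 * pi * x) + cos(2 * pi * y))) + e + 20)

assert abs(objective(0.0, 0.0)) < 1e-12  # global minimum at (0, 0)
assert objective(1.0, 1.0) > 3.0         # a nearby ripple sits much higher
```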
Multimodal Function 2
The range is bounded to -5.0 and 5.0, and the function has four global optima at [3.0, 2.0], [-2.805118, 3.131312], [-3.779310, -3.283186], and [3.584428, -1.848126], each evaluating to 0.0. This function is known as Himmelblau's function.
# multimodal test function
from numpy import arange
from numpy import meshgrid
from matplotlib import pyplot
from mpl_toolkits.mplot3d import Axes3D
# objective function
def objective(x, y):
    return (x**2 + y - 11)**2 + (x + y**2 - 7)**2
# define range for input
r_min, r_max = -5.0, 5.0
# sample input range uniformly at 0.1 increments
xaxis = arange(r_min, r_max, 0.1)
yaxis = arange(r_min, r_max, 0.1)
# create a mesh from the axis
x, y = meshgrid(xaxis, yaxis)
# compute targets
results = objective(x, y)
# create a surface plot with the jet color scheme
figure = pyplot.figure()
axis = figure.add_subplot(projection='3d')
axis.plot_surface(x, y, results, cmap='jet')
# show the plot
pyplot.show()
Running the example creates a surface plot of the function.
Surface Plot of Multimodal Optimization Function 2
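The four optima can be verified directly (an added check, not in the original listing): each evaluates to zero, up to the rounding of the published coordinates.

```python
# Verify Himmelblau's function: all four listed optima evaluate to ~0.
def objective(x, y):
    return (x**2 + y - 11)**2 + (x + y**2 - 7)**2

optima = [(3.0, 2.0), (-2.805118, 3.131312),
          (-3.779310, -3.283186), (3.584428, -1.848126)]
for x, y in optima:
    assert objective(x, y) < 1e-6  # zero up to coordinate rounding
```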
Multimodal Function 3
The range is bounded to -10.0 and 10.0, and the function has four global optima at [8.05502, 9.66459], [-8.05502, 9.66459], [8.05502, -9.66459], and [-8.05502, -9.66459], each evaluating to approximately -19.2085. This function is known as the Holder table function.
# multimodal test function
from numpy import arange
from numpy import exp
from numpy import sqrt
from numpy import cos
from numpy import sin
from numpy import e
from numpy import pi
from numpy import absolute
from numpy import meshgrid
from matplotlib import pyplot
from mpl_toolkits.mplot3d import Axes3D
# objective function
def objective(x, y):
    return -absolute(sin(x) * cos(y) * exp(absolute(1 - (sqrt(x**2 + y**2) / pi))))
# define range for input
r_min, r_max = -10.0, 10.0
# sample input range uniformly at 0.1 increments
xaxis = arange(r_min, r_max, 0.1)
yaxis = arange(r_min, r_max, 0.1)
# create a mesh from the axis
x, y = meshgrid(xaxis, yaxis)
# compute targets
results = objective(x, y)
# create a surface plot with the jet color scheme
figure = pyplot.figure()
axis = figure.add_subplot(projection='3d')
axis.plot_surface(x, y, results, cmap='jet')
# show the plot
pyplot.show()
Running the example creates a surface plot of the function.
Surface Plot of Multimodal Optimization Function 3
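As with the previous functions, the listed optima can be checked directly (my addition): by the sign symmetry of sin and cos, all four points evaluate to the same minimum, approximately -19.2085.

```python
# Verify the Holder table function: all four optima give the same value,
# approximately -19.2085 (symmetric in the signs of x and y).
from math import sin, cos, exp, sqrt, pi

def objective(x, y):
    return -abs(sin(x) * cos(y) * exp(abs(1 - sqrt(x**2 + y**2) / pi)))

optima = [(8.05502, 9.66459), (-8.05502, 9.66459),
          (8.05502, -9.66459), (-8.05502, -9.66459)]
for x, y in optima:
    assert abs(objective(x, y) - (-19.2085)) < 1e-2
```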
Further Reading
This section provides more resources on the topic if you are looking to go deeper.
How do Delphi, WPF .NET Framework, and Electron compare to one another, and what is the best way to make an objective comparison? Embarcadero commissioned a whitepaper to investigate the differences between Delphi, WPF .NET Framework, and Electron for building Windows desktop applications. The benchmark application, a Windows 10 Calculator clone, was recreated in each framework by three volunteer Delphi Most Valuable Professionals (MVPs), an expert freelance WPF developer, and an expert freelance Electron developer. In this blog post, we will explore the Hardware Access metric, which is part of the flexibility comparison used in the whitepaper. The calculator app itself does not use hardware access in any framework, so the comparison is between the frameworks themselves.
Device-Specific Hardware Access
Does the framework facilitate access to data from device sensors (GPS, microphone, accelerometers, camera, etc.) and physical action through similar devices? Frameworks that open the doors to the multitude of sensors and actuators available on smart devices today create business opportunities and novel solutions to consumer pain points.
Delphi's standard libraries provide easy access to almost every available database type and allow developers to access operating-system functionality on each platform, as well as interact with I/O devices and hardware sensors. WPF can access Windows operating-system functionality and I/O devices through .NET libraries, but with managed code after compilation rather than native code. Electron provides hardware access from its node.js process and can access some, but not all, operating-system functions through node.js libraries.
After reviewing all three frameworks, Delphi leads the flexibility category thanks to its flexible, automated deployment to all major platforms, its scalability across all levels of development, and its visual design system. WPF with .NET Framework is competitive on the Windows platform but cannot compete on macOS or mobile devices. Finally, Electron has the fewest barriers to entry and the most development-tool options, but relies heavily on manual deployments, cannot target mobile devices directly by default, is the least scalable, and lacks the hardware and operating-system access of its competitors.
Let's take a look at each framework.
Delphi Hardware Access
Delphi's FMX framework includes libraries that enable interaction with a device's peripheral sensors and components regardless of platform. These libraries compile to native code. The Delphi RTL, direct memory access, and other low-level features give it full access to the hardware platform, including inline assembly code on x86 desktop platforms.
Kernel-mode drivers for Windows can be built in Delphi. Microsoft defines kernel-mode drivers as follows: "Kernel-mode drivers execute in kernel mode as part of the executive, which consists of kernel-mode operating system components that manage I/O, Plug and Play memory, processes and threads, security, and so on. Kernel-mode drivers are typically layered. Generally, higher-level drivers typically receive data from applications, filter the data, and pass it to a lower-level driver that supports device functionality."
Delphi offers easy access to WMI, and there is an open-source project that will quickly generate the code you need. According to Microsoft, WMI "is the infrastructure for management data and operations on Windows-based operating systems."
The RTL provides a component, TBluetooth, that gives you access to all of the RTL's Classic Bluetooth features. Drag a TBluetooth component from the Tool Palette onto a form or data module of your application.
A sensor measures a physical quantity and converts it into a signal that an application can read. System.Sensors.Components provides your applications with components that let you retrieve information from many different kinds of hardware sensors.
This sample project shows how to use and manipulate a device's camera. The example demonstrates the use of TCameraComponent.
WPF .NET Framework Hardware Access
WPF .NET Framework can access numerous Windows libraries for sensors, I/O devices, and other PC peripherals. WPF's access to hardware is through managed code rather than native code, although there is a native (unmanaged) interface via P/Invoke. This bridge limits some access.
Electron Hardware Access
Electron can access operating-system functions and hardware peripherals through node.js libraries. Its cross-platform Chromium base enables high-level hardware access on all major desktop platforms. Electron's access to hardware is through managed code rather than native code, and it can only reach features exposed through libraries.
Explore all the metrics in the whitepaper, "Discovering the Best Developer Framework through Benchmarking":
Join Delphi MVP Ian Barker as he shows how to get the modern Windows 10 look and feel using RAD Studio’s themes and some *very* cost-effective third-party controls. Get that truly modern WOW factor kickstarted with minimal effort – Ian will show you how.
If you would like to create professional-looking instrumentation and multimedia applications with VCL and FireMonkey, you should read this post!
What is TMS Instrumentation?
TMS Instrumentation is a library full of components, methods, and routines enabling you to create professional-looking instrumentation and multimedia applications. The set contains more than 80 instrumentation and digital components, such as LEDs, scopes, banners, sliders, buttons, meters, and much more.
What are the components?
Meters
Sliders & Bars
LED styles
Counters
Multi-colored Matrix
Button and Graphics
Scope
// OnTimer handler: sweeps the jog meter back and forth between
// -160 and +160 while a counter increments and rolls over at 999999.
procedure TfrmMeters.VrTimerTimer(Sender: TObject);
begin
  if VrJogMeter.Value.Value = -160 then
    FMeterUp := True;
  if VrJogMeter.Value.Value = 160 then
    FMeterUp := False;
  if FMeterUp then
    VrJogMeter.Value.Value := VrJogMeter.Value.Value + 1
  else
    VrJogMeter.Value.Value := VrJogMeter.Value.Value - 1;
  VrCounter.Value := VrCounter.Value + 1;
  if VrCounter.Value = 999999 then
    VrCounter.Value := 0;
end;

// OnShow handler: seed the random number generator once.
procedure TfrmMeters.FormShow(Sender: TObject);
begin
  Randomize;
end;

// OnChange handler: drive several gauges from a single wheel control.
procedure TfrmMeters.VrWheelChange(Sender: TObject);
begin
  VrThermoMeter.Value.Value := VrWheel.Position;
  VrTank.Position := VrWheel.Position;
  VrMeter.Position := VrWheel.Position;
  VrLevelBar.Position := VrWheel.Position;
end;
The visual components are just part of the component set. You also get several non-visual components, for example to control the keyboard or to build efficient multithreaded applications, and more.
Non-visual components
TVrDirScan: non-visual component for locating files on local or network drives
TVrRunOnce: disables multiple instances of the application
TVrTrayGauge: component to add a progress indicator to the system tray
and more
How to download the TMS Instrumentation component set?
In JavaScript, an object is defined as a collection of key-value pairs. An object is also a non-primitive data type.
You'll oftentimes need to combine objects into a single one which contains all the individual properties of its constituent parts. This operation is called merging. The two most common ways of doing this are:
Using the spread operator (...)
Using the Object.assign() method
In this tutorial, we'll take a look at how to merge two objects dynamically in JavaScript.
After that, we'll cover the difference between shallow merge and deep merge, because it is essential to fully understanding object merging.
Merge JavaScript Objects Using the Spread Operator
We can merge different objects into one using the spread operator (...). This is also the most common method of merging two or more objects.
This is an immutable approach to merging two objects: the two starting objects used to build the merged one are not changed in any way by side effects. In the end, you've got a new object constructed from the two, while they both remain intact.
Note: If there are common properties between these two objects, such as both of them having a location, the properties from the second object (job) will overwrite the properties of the first object (person):
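For instance, using hypothetical person and job objects (the property names and values here are made up for illustration):

```javascript
const person = { name: "John Doe", location: "Austin" };
const job = { title: "Developer", location: "Remote" };

// Spread copies person's properties first, then job's,
// so the shared "location" key ends up with job's value.
const employee = { ...person, ...job };

console.log(employee);
// { name: 'John Doe', location: 'Remote', title: 'Developer' }

console.log(person.location); // 'Austin' -- the sources are untouched
```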
If more than two objects are being merged, the rightmost object overrides the ones to the left.
Merge JavaScript Objects Using Object.assign()
Another common way to merge two or more objects is to use the built-in Object.assign() method:
Object.assign(target, source1, source2, ...);
This method copies all the properties from one or more source objects into the target object. Just like with the spread operator, while overwriting, the right-most value is used:
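Continuing with the same hypothetical person and job objects, Object.assign() behaves the same way when the target is a fresh empty object:

```javascript
const person = { name: "John Doe", location: "Austin" };
const job = { title: "Developer", location: "Remote" };

// Pass {} as the target so neither source object is mutated;
// on conflicting keys the right-most source (job) wins.
const employee = Object.assign({}, person, job);

console.log(employee.location); // 'Remote'
console.log(person.location);   // 'Austin' -- person is unchanged
```

Note that if you passed person itself as the target, Object.assign() would mutate it in place; using a fresh empty target is what keeps the operation immutable, as with the spread approach.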
Again, keep in mind that the object referenced by employee is a completely new object, and is in no way linked to the objects referenced by person or job.
Shallow Merge vs Deep Merge
In the case of a shallow merge, if one of the properties on a source object is another object, the target object contains the reference to the same object that exists in the source object. A new object is not created in this case.
Let's tweak the previous person object, and make location an object for itself:
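A sketch of that scenario (again with made-up values) shows the shared reference:

```javascript
const person = {
  name: "John Doe",
  location: { city: "Austin", state: "TX" }, // now a nested object
};
const job = { title: "Developer" };

const employee = { ...person, ...job };

// A shallow merge copies the *reference* to the nested object,
// so both variables point at the very same location object:
console.log(employee.location === person.location); // true

// Mutating it through one reference is visible through the other:
employee.location.city = "Dallas";
console.log(person.location.city); // 'Dallas'
```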
We can see that the reference to the location object in both person and employee object is the same. In fact, both the spread operator (...) and Object.assign() perform a shallow merge.
JavaScript has no deep merge support out of the box. However, there are third-party modules and libraries which do support it, like Lodash's _.merge.
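To make the distinction concrete, here is a minimal recursive deep-merge sketch. This is an illustration of the idea only; it ignores arrays, Dates, cycles, and other edge cases that a library like Lodash's _.merge handles properly:

```javascript
function deepMerge(target, source) {
  const out = { ...target };
  for (const key of Object.keys(source)) {
    const a = out[key];
    const b = source[key];
    // Recurse only when both sides are objects;
    // otherwise the source value simply wins.
    if (a && b && typeof a === "object" && typeof b === "object") {
      out[key] = deepMerge(a, b);
    } else {
      out[key] = b;
    }
  }
  return out;
}

const person = { name: "John Doe", location: { city: "Austin", state: "TX" } };
const update = { location: { city: "Dallas" } };

const merged = deepMerge(person, update);
console.log(merged.location);      // { city: 'Dallas', state: 'TX' }
console.log(person.location.city); // 'Austin' -- original untouched
```

Unlike the shallow approaches above, the nested location object is rebuilt rather than shared, so its unmentioned state property survives the merge.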
Conclusion
In this tutorial, we've taken a look at how to merge two objects in JavaScript. We've explored the spread operator (...) and the Object.assign() method, which both perform a shallow merge of two or more objects, into a new one, without affecting the constituent parts.
A credential stuffing attack is a cybercrime technique in which attackers take stolen username/password pairs and use automated scripts to try them against a targeted website. It works because the majority of users reuse similar credentials across more than one account, which means one data breach can threaten several other accounts. Attackers use tools like Sentry MBA to test such credentials in a highly automated bulk effort. Sometimes an attempt succeeds in logging in, allowing them to take advantage of services, stored credit card numbers, and other personal information.
The attackers inject username and password pairs to attempt unauthorized access to user accounts. Organizations therefore need to stress the importance of using a different password for each account. Reusing passwords across accounts is hazardous because once hackers learn one of your passwords, they gain access to all of your other accounts.
When you understand the different ways attackers use to access your business information, you will do everything possible to keep them at bay. Attackers are dangerous to any business, as they can use that access to bring your business down.
There are multiple ways of detecting credential stuffing that you can apply to prevent an attack on your business. Here are examples of what you can do to catch a stuffing attack.
Several Login Attempts
Monitor your accounts for multiple login attempts in a short period. Multiple attempts typically happen when someone who is not the account owner tries several login credentials in the hope that one of them will be accepted by the system. They can come from a single endpoint or from several endpoints.
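As a purely illustrative sketch (the threshold, time window, and in-memory storage here are arbitrary choices, not part of any product mentioned in this post), such monitoring boils down to counting recent failed logins per source:

```javascript
// Hypothetical detector: flag a source (e.g. an IP address) once it
// exceeds a failed-login threshold inside a sliding time window.
const WINDOW_MS = 60_000; // 1 minute -- arbitrary
const THRESHOLD = 5;      // arbitrary

const failures = new Map(); // source -> timestamps of recent failures

function recordFailure(source, now = Date.now()) {
  // Keep only failures that are still inside the window, then add this one.
  const recent = (failures.get(source) || []).filter((t) => now - t < WINDOW_MS);
  recent.push(now);
  failures.set(source, recent);
  return recent.length > THRESHOLD; // true => possible credential stuffing
}

// Six rapid failures from one address trip the detector:
let flagged = false;
for (let i = 0; i < 6; i++) {
  flagged = recordFailure("203.0.113.7", 1_000 + i);
}
console.log(flagged); // true
```

A real system would also aggregate across endpoints and persist the counts, but the core signal is the same: many failed logins from one place in a short time.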
Separate IP Addresses
Detect known malicious endpoints attempting to log in, using IP addresses or fingerprinting techniques. Also, check for automation software in the login process. Avoid scenarios where your company’s employees use many different devices to log in to your system.
Credential Removal Attempts
Look for any attempts to remove credential-based login and to replace it with password-less authentication.
Tips on Preventing Credential Stuffing Attack
Use the following tips to keep attackers from mounting credential stuffing attacks on your business:
Multi-Factor Authentication
Multi-factor authentication (MFA) is one of the most effective defences against credential stuffing. It works by asking users for an additional authentication factor, such as a code on their mobile phone. Attacker bots cannot provide a physical authentication method such as a mobile phone, and most cannot handle multiple authentication steps at all. Combining your authentication process with other techniques gives the attackers a hard time and prevents credential stuffing attacks.
Use of CAPTCHA
Using CAPTCHA is another way of making sure attackers do not access your accounts. CAPTCHA requires users to perform specific actions to prove that they are human, which is an effective way of reducing credential stuffing attacks. However, the CAPTCHA method has limitations and can only be used in specific scenarios.
Use of Email Address
Avoid using email addresses as user IDs, since credential stuffing relies on the reuse of usernames or account IDs. A stuffing attack is far more likely to succeed when users' email addresses double as account IDs. Disallowing email addresses as IDs is one of the most effective ways of reducing credential stuffing attacks.
The attackers use an account checker to try the stolen credentials on multiple websites, such as social media sites or online marketplaces. If an attempt works, the attacker can match further accounts to the stolen credentials. That is a quick way of draining a stolen account of any stored value, such as credit card numbers or other personally identifiable information. Using the stolen credentials, the attacker can also create further transactions.
How to Use Imperva Bot Management as a Preventive Measure for Credential Stuffing Attacks
Imperva provides multi-layered protection to ensure that websites and applications are available, easy to access, and safe. The Imperva platform works in the following ways:
DDoS Protection
DDoS protection maintains uptime in all situations by preventing any type of DDoS attack from hindering access to your website and network infrastructure. Attackers can use your downtime to attack your accounts.
CDN Protection
A CDN enhances website performance while reducing bandwidth costs, with a CDN designed for developers. You can also accelerate APIs and dynamic websites. It is essential to make sure that your business is protected at all times from any form of credential attack.
WAF
The cloud-based WAF solution is effective at permitting good traffic and blocking bad traffic, safeguarding applications at the edge. The best thing about it is that it keeps the applications and APIs in your network safe.
API Security
API security is essential for protecting APIs by ensuring only the desired traffic can access your endpoint while keeping everyone else away. It also detects and blocks exploits of vulnerabilities in your website.
There are several other methods of protecting the credential stuffing of your account. However, it is essential to make sure that you use the most effective and easy-to-use method. Sometimes it calls for multiple prevention measures just to make sure that you are completely protected. Your business needs proper protection from the credential stuffing attack as well as all the other cybersecurity attacks. Cybersecurity is critical for every business as a breach of security can break your business.
Regardless of the method you choose to protect your business, one most effective prevention method is making sure the employees are well trained. It is essential to train the staff and make sure they understand the effect of cyber attacks and how they can change everything in your business. Attackers can bring down your business in different ways.
They can steal not only important information but also business valuables, for example by transferring money from accounts. They can also bring your business down by tainting your reputation. When customers and suppliers hear that you have been attacked, they will lose confidence in you and eventually limit their transactions with you. That can be the beginning of your business's downfall and can lead to its collapse.
A trial version of 10.4.2 is already available, and customers who purchase the product from now on will be able to download 10.4.2. Customers who already own the product can use RAD Studio 10.4.2 with their existing license, provided they have an active Update Subscription. 10.4.2 can be downloaded from the new customer portal site (my.embarcadero.com).
How do Delphi, WPF .NET Framework, and Electron perform compared to each other, and what’s the best way to make an objective comparison? Embarcadero commissioned a whitepaper to investigate the differences between Delphi, WPF .NET Framework, and Electron for building Windows desktop applications. The benchmark application – a Windows 10 Calculator clone – was recreated in each framework by three volunteer Delphi Most Valuable Professionals (MVPs), one expert freelance WPF developer, and one expert freelance Electron developer. In this blog post, we are going to explore the Hardware Access metric, which is part of the flexibility comparison used in the whitepaper. The calculator app itself does not make use of hardware access, so the comparison here is between the frameworks themselves.
Access to Device-Specific Hardware
Does the framework facilitate access to data from device sensors (GPS, microphone, accelerometers, camera, etc.) and physical action through similar devices? Frameworks that “throw open the doors” to the plethora of sensors and actuators available on smart devices today create business opportunities and novel solutions to consumer pain.
Delphi’s standard libraries provide easy access to nearly every database type available and allow developers to access operating system functionality on every platform as well as interact with I/O devices and hardware sensors. WPF can access Windows operating system functionality and I/O devices through .NET libraries but with managed code after compilation rather than native code. Electron provides hardware access from its node.js process and can access some but not all operating system functions via node.js libraries.
After reviewing all three frameworks, Delphi holds the lead in the flexibility category due to its flexible and automated deployment to all major platforms, scalability to every level of development, and visual design system. WPF with .NET Framework is competitive on the Windows platform but lacks the ability to compete on macOS or mobile devices. Finally, Electron has the fewest barriers to entry and the most development tool options but relies heavily on manual deployments, cannot target mobile devices directly by default, is the least scalable, and lacks the same hardware and operating system access of its competitors.
Let’s take a look at each framework.
Delphi Hardware Access
Delphi’s FMX framework includes libraries that allow interaction with a device’s peripheral sensors and components regardless of platform. These libraries compile into native code. The Delphi RTL, direct memory access, and other low level features give it full access to the hardware platform, including inline assembly code on x86 desktop platforms.
It is possible to create kernel mode drivers for Windows in Delphi. Microsoft defines kernel-mode drivers as follows: “Kernel-mode drivers execute in kernel mode as part of the executive, which consists of kernel-mode operating system components that manage I/O, Plug and Play, memory, processes and threads, security, and so on. Kernel-mode drivers are typically layered. Generally, higher-level drivers typically receive data from applications, filter the data, and pass it to a lower-level driver that supports device functionality.”
Delphi offers easy access to WMI, and there is an open source project that will quickly generate the code you need. According to Microsoft, WMI “is the infrastructure for management data and operations on Windows-based operating systems.”
The RTL provides a component, TBluetooth, that gives you access to all the Classic Bluetooth features of the RTL. Drag a TBluetooth component from the Tool Palette onto a form or data module of your application.
A sensor measures a physical quantity and converts it into a signal that can be read by an application. System.Sensors.Components provides your applications with components that let you obtain information from many different types of hardware sensors.
This sample project shows how to use and manipulate the camera of a device. The sample demonstrates the use of the TCameraComponent.
WPF .NET Framework Hardware Access
WPF .NET Framework can access numerous Windows libraries for sensors, I/O devices, and other peripherals for PCs. WPF’s access to hardware is through managed code rather than native code, but there is a native (unmanaged) interface through P/Invoke. This bridge limits some access.
Electron Hardware Access
Electron can access operating system functions and hardware peripherals through node.js libraries. Its cross-platform Chromium base facilitates high-level hardware access on all major desktop platforms. Electron’s access to hardware is through managed code rather than native code, and it can only access features exposed through libraries.
Explore all the metrics in the “Discovering The Best Developer Framework Through Benchmarking” whitepaper:
How do Delphi, WPF .NET Framework, and Electron perform compared to each other, and what’s the best way to make an objective comparison? Embarcadero commissioned a whitepaper to investigate the differences between Delphi, WPF .NET Framework, and Electron for building Windows desktop applications. The benchmark application – a Windows 10 Calculator clone – was recreated in each framework by three volunteer Delphi Most Valuable Professionals (MVPs), one expert freelance WPF developer, and one expert freelance Electron developer. In this blog post, we are going to explore the Project Variety metric, which is part of the flexibility comparison used in the whitepaper.
One thing that needs to be clear about this comparison is that Delphi is a single integrated IDE with multiple platform targets and frameworks for a number of different uses across the entire stack. WPF is a “UI framework that creates desktop client applications” (in Microsoft’s own words) and Electron is one solution for building cross-platform desktop client apps. The frameworks included with Delphi are VCL and FMX plus the RTL (run-time library) and project types like Windows services, TCP/IP server solutions, console apps, DLLs, web solutions which target IIS and Apache, and more. Delphi allows you to create Windows desktop client applications like WPF and Electron but you can also build so much more in Delphi than just desktop client apps. Everything from web applications to mobile applications to desktop applications. This is not a comparison with NodeJS and C# which offer many solutions that address other types of apps as well.
How does Project Diversity affect choosing an application framework?
Does the framework support development of different types of applications from stand-alone desktop apps to Windows services? Flexible frameworks allow developers to create mobile applications, desktop services, and everything in between.
Delphi’s major advantage over WPF and Electron is that its FMX framework can deploy one body of source code as a binary to any major desktop or mobile platform, maximizing a business’s reach to customers and minimizing code duplication and maintenance/upgrade headaches. It can support projects of every size from logic controllers for industrial automation to world-wide inventory management, and be developed for every tier from a database-heavy back end to the GUI client-side of an application. Finally, Delphi’s standard libraries provide easy access to nearly every database type available and allow developers to access operating system functionality on every platform as well as interact with I/O devices and hardware sensors.
WPF with .NET Framework targets Windows computers directly and provides cross-platform support through a browser deployment from a similar codebase. The framework is primarily geared toward client-side desktop applications but can incorporate business logic in C# for middle-tier or back-end functions and access the ADO .NET Entity Framework for databases. WPF can access Windows operating system functionality and I/O devices through .NET libraries but with managed code after compilation rather than native code.
Electron is an open-source framework targeting all desktop operating systems through its Chromium browser base. It focuses on client-side applications, typically web-centric, but uses node.js for middle-tier and back-end services. Electron provides hardware access from its node.js process and can access some but not all operating system functions via node.js libraries.
Let’s take a look at each framework.
Delphi
Delphi can be used to create applications on all levels from Windows services to Programmable Systems-on-Chip (PSOC) to enterprise applications with database, UI, and network components. Third-party tools extend Delphi applications to the web.
WPF
WPF with the .NET Framework focuses on developing “visually stunning desktop applications”. It has access to all Windows .NET functionality, including database access and multimedia tools.
Electron
Electron applications mimic desktop applications by running in the Chromium browser and are typically web-centric (i.e. collaboration, messaging, etc.). Electron uses node.js for native services, utilities, and back-end applications.
Delphi Provides The Most Framework Flexibility
After reviewing all three frameworks, Delphi holds the lead in the flexibility category due to its flexible and automated deployment to all major platforms, scalability to every level of development, and visual design system. WPF with .NET Framework is competitive on the Windows platform but lacks the ability to compete on macOS or mobile devices. Finally, Electron has the fewest barriers to entry and the most development tool options but relies heavily on manual deployments, cannot target mobile devices directly by default, is the least scalable, and lacks the same hardware and operating system access of its competitors.
Explore all the metrics in the “Discovering The Best Developer Framework Through Benchmarking” whitepaper:
Explore todas as métricas no white paper “Descobrindo a melhor estrutura de desenvolvedor por meio de benchmarking”:
In this article, we will look at a famous algorithm in Graph Theory, Tarjan Algorithm. We will also look at an interesting problem related to it, discuss the approach and analyze the complexities.
Tarjan’s Algorithm is mainly used to find Strongly Connected Components in a directed graph. A directed graph is a graph made up of a set of vertices connected by edges, where each edge has a direction associated with it. A Strongly Connected Component (SCC) is a self-contained portion of a directed graph in which every vertex can reach every other vertex.
Let us understand this with help of an example, consider this graph:
In the above graph, the box A and B show the SCC or Strongly Connected Components of the graph. Let us look at a few terminologies before explaining why the above components are SCC.
Back-Edge: An edge (u,v) is a back-edge if u and v have a descendant-ancestor relationship in the DFS tree: u is the descendant and v is the ancestor. A back-edge closes a cycle, which is what makes it important for forming a Strongly Connected Component.
Cross-Edge: An edge (u,v) is a cross-edge if there is no ancestor-descendant relationship between u and v. Cross-edges are not responsible for forming an SCC; they mainly connect two SCCs together.
Tree-Edge: An edge (u,v) with a parent-child relationship is a tree-edge. Tree-edges are obtained during the DFS traversal and together form the DFS tree of the graph.
Explanation:
So, in the above graph, the edges (1,3), (3,2), (4,5), (5,6), (6,7) are tree edges because they follow the parent-child relationship. The edges (2,1) and (7,4) are back edges: from node 2 (descendant) we go back to 1 (ancestor), completing the cycle (1 -> 3 -> 2), and from node 7 we go back to 4, completing the cycle (4 -> 5 -> 6 -> 7). Hence the components (1,3,2) and (4,5,6,7) are the Strongly Connected Components of the graph. The edge (3,4) is a cross edge because it follows no such relationship and connects the two SCCs together.
Note: A Strongly Connected Component in a graph must have a Back-Edge to its head node.
Tarjan’s Algorithm
Now let us see how Tarjan’s Algorithm will help us find a Strongly Connected Component.
The idea is to do a Single DFS traversal of the graph which produces a DFS tree.
Strongly Connected Components are subtrees of the DFS tree. If we find the head of each subtree, we can reach every node in that subtree, which is one SCC, and then print the SCC including the head.
We consider only tree edges and back edges while traversing; we ignore cross edges, since they separate one SCC from another.
Now let us look at how to implement the above steps. We assign each node a time value for when it is visited or discovered. At the root (start node) the time value is 0. For every node in the graph, we keep a tuple of two time values: Disc and Low.
Disc: This indicates the time for when a particular node is discovered or visited during DFS traversal. For each node we increase the Disc time value by 1.
Low: the lowest discovery time reachable from a given node. If there is a back edge, we update the low value based on certain conditions. The maximum Low value a node can be assigned equals its own Disc value, since the minimum discovery time reachable from a node is the time at which the node itself was discovered.
Note: the Disc value, once assigned, never changes, while the low value keeps being updated as we traverse each node. We will discuss the update conditions next.
Implementation in Java
Step 1:
We use a Map (HashMap) to store the graph: the key holds a node, and the value holds the list of edges from that node. For Disc and Low we use two integer arrays whose size equals the number of vertices, filled with -1 to indicate that no node has been visited yet. We use a Stack (for DFS) and a boolean array inStack to check in O(1) whether an already discovered node is currently on the stack, since searching the stack itself would be a costly O(n) operation.
Step 2:
For each node we process, we push it onto the stack and mark it true in inStack. We maintain a static timer variable initialized to 0. For an edge (u,v), if node v is already present on the stack, then (u,v) is a back edge and u and v are strongly connected, so we update the low value as:
if(Back-Edge) then Low[u] = Min ( Low[u] , Disc[v] ).
After visiting a node, when the call returns to its parent we update the parent’s Low value, ensuring that the Low value ends up the same for all nodes within one Strongly Connected Component.
Step 3:
Now, for an edge (u,v), if node v has not been discovered yet, it is a tree edge. In that case, after the recursive call for v returns, we update the low value of u as shown below. (If v has been discovered but is no longer on the stack, the edge is a cross edge and is ignored.)
if (Tree-Edge) then Low[u] = Min ( Low[u] , Low[v] ).
We identify the head (start) node of each SCC as a node u with Disc[u] = Low[u]; every SCC has exactly one node satisfying this condition. When we find one, we print the SCC’s nodes by popping them off the stack, marking inStack false for each popped value.
Note: all nodes of a Strongly Connected Component end up with the same low value, and the nodes are printed in reverse order.
Now, let us look at the code for this in Java:
import java.util.*;

public class TarjanSCC
{
    static HashMap<Integer, List<Integer>> adj = new HashMap<>();
    static int Disc[] = new int[8];
    static int Low[] = new int[8];
    static boolean inStack[] = new boolean[8];
    static Stack<Integer> stack = new Stack<>();
    static int time = 0;

    static void DFS(int u)
    {
        Disc[u] = time;
        Low[u] = time;
        time++;
        stack.push(u);
        inStack[u] = true;

        List<Integer> temp = adj.get(u); // get the list of edges from the node
        if (temp == null)
            return;

        for (int v : temp)
        {
            if (Disc[v] == -1) // if v is not visited: tree edge
            {
                DFS(v);
                Low[u] = Math.min(Low[u], Low[v]);
            }
            // Differentiate back-edge and cross-edge
            else if (inStack[v]) // back-edge case
                Low[u] = Math.min(Low[u], Disc[v]);
        }

        if (Low[u] == Disc[u]) // if u is the head node of an SCC
        {
            System.out.print("SCC is: ");
            while (stack.peek() != u)
            {
                System.out.print(stack.peek() + " ");
                inStack[stack.peek()] = false;
                stack.pop();
            }
            System.out.println(stack.peek());
            inStack[stack.peek()] = false;
            stack.pop();
        }
    }

    static void findSCCs_Tarjan(int n)
    {
        for (int i = 1; i <= n; i++)
        {
            Disc[i] = -1;
            Low[i] = -1;
            inStack[i] = false;
        }
        for (int i = 1; i <= n; ++i)
        {
            if (Disc[i] == -1)
                DFS(i); // call DFS for each undiscovered node
        }
    }

    public static void main(String args[])
    {
        adj.put(1, new ArrayList<Integer>());
        adj.get(1).add(3);
        adj.put(2, new ArrayList<Integer>());
        adj.get(2).add(1);
        adj.put(3, new ArrayList<Integer>());
        adj.get(3).add(2);
        adj.get(3).add(4);
        adj.put(4, new ArrayList<Integer>());
        adj.get(4).add(5);
        adj.put(5, new ArrayList<Integer>());
        adj.get(5).add(6);
        adj.put(6, new ArrayList<Integer>());
        adj.get(6).add(7);
        adj.put(7, new ArrayList<Integer>());
        adj.get(7).add(4);
        findSCCs_Tarjan(7);
    }
}
Output:
SCC is: 7 6 5 4
SCC is: 2 3 1
The code implements the same example discussed above; the output shows the Strongly Connected Components in reverse order, since we use a stack. Now let us look at the complexities of our approach.
Time Complexity: We are basically doing a Single DFS Traversal of the graph so time complexity will be O( V+E ). Here, V is the number of vertices in the graph and E is the number of edges.
Space Complexity: We at the most store the total vertices in the graph in our map, stack, and arrays. So, the overall complexity is O(V).
So that’s it for the article; you can try out different examples and run the code in your Java compiler for a better understanding.
Let us know any suggestions or doubts regarding the article in the comment section below.
Embarcadero offers a set of components and libraries for working with Bluetooth Low Energy (BLE) devices. For instance, you can use the new TBluetoothLE component to access the RTL’s Bluetooth LE support in server and client applications built with FireMonkey.
Furthermore, the RTL provides a BLE scan-filter implementation that takes advantage of the new low-consumption BLE chips. You can likely connect to any BLE-enabled device using the built-in Bluetooth components that come with RAD Studio.
In addition, there are third-party BLE libraries and components available to Delphi and C++Builder developers, such as the IPWorks BLE library.
What is IPWorks BLE Library?
IPWorks BLE includes a set of powerful components for integrating Bluetooth Low Energy communications into web, desktop, and other applications. The BLEClient component provides a simple but flexible GATT client implementation for working with the services, characteristics, and descriptors exposed by BLE GATT servers on BLE devices.
procedure TFormBLEClient.iplBLEClient1Advertisement(Sender: TObject; const ServerId,
  Name: string; RSSI, TxPower: Integer; const ServiceUuids, ServicesWithData,
  SolicitedServiceUuids: string; ManufacturerCompanyId: Integer;
  ManufacturerData: string; ManufacturerDataB: TBytes; IsConnectable, IsScanResponse: Boolean);
var
  serverIdExists: Boolean;
  I: Integer;
begin
  // Check whether this ServerId has been seen before. If it has not,
  // remember it and display its advertisement information.
  serverIdExists := False;
  for I := 0 to ServerIds.Count - 1 do
  begin
    if ServerIds[I] = ServerId then
    begin
      serverIdExists := True;
      Break;
    end;
  end;
  if not serverIdExists then
  begin
    ServerIds.Add(ServerId);
    lvAdvertisements.Items.Add();
    lvAdvertisements.Items[lvAdvertisements.Items.Count - 1].Caption := ServerId;
    lvAdvertisements.Items[lvAdvertisements.Items.Count - 1].SubItems.Add(Name);
    lvAdvertisements.Items[lvAdvertisements.Items.Count - 1].SubItems.Add(IntToStr(RSSI));
    lvAdvertisements.Items[lvAdvertisements.Items.Count - 1].SubItems.Add(IntToStr(TxPower));
    lvAdvertisements.Items[lvAdvertisements.Items.Count - 1].SubItems.Add(BoolToStr(IsConnectable, True));
    lvAdvertisements.Items[lvAdvertisements.Items.Count - 1].SubItems.Add(ServiceUuids);
    lvAdvertisements.Items[lvAdvertisements.Items.Count - 1].SubItems.Add(ServicesWithData);
    lvAdvertisements.Items[lvAdvertisements.Items.Count - 1].SubItems.Add(IntToStr(ManufacturerCompanyId));
    lvAdvertisements.Items[lvAdvertisements.Items.Count - 1].SubItems.Add(TBytesToHex(ManufacturerDataB)); // raw bytes converted to a hex string
  end;
end;
IPWorks BLE Library Features:
Complete GATT Client component
User-friendly scanning and service discovery
Full support for reading, writing and subscribing to characteristics
Fast, robust, reliable, and thread-safe capabilities
and more
How do I get the IPWorks BLE Library?
Head over and check out the BLE library on the GetIt portal and download it in the IDE using the GetIt Package Manager.
In this tutorial, we'll take a look at how to parse Datetime with parsedatetime in Python.
To use the parsedatetime package we first need to install it using pip:
$ pip install parsedatetime
Should pip install parsedatetime fail, the package is also open-source and available on Github.
Convert String to Python's Datetime Object with parsedatetime
The first, and most common way to use parsedatetime is to parse a string into a datetime object. First, you'll want to import the parsedatetime library, and instantiate a Calendar object, which does the actual input, parsing and manipulation of dates:
Now we can call the parse() method of the calendar instance with a string as an argument. You can put in regular datetime-formatted strings, such as 1-1-2021 or human-readable values such as tomorrow, yesterday, next year, last week, lunch tomorrow, etc... We can also use 'End of Day' structures with tomorrow eod
Let's convert a datetime and human-readable string to a datetime object using parsedatetime:
This isn't very human-readable... The returned tuple for each conversion consists of the struct_time object, which contains information like the year, month, day of month, etc. The second value is the status code - an integer denoting how the conversion went.
0 means unsuccessful parsing, 1 means successful parsing to a date, 2 means successful parsing to a time and 3 means successful parsing to a datetime.
Then again, we're only getting the day of the month here. Usually, we'd like to output something similar to a YYYY-mm-dd HH:mm:ss format, or any variation of that.
Thankfully, we can easily use the time.struct_time result and generate a regular Python datetime with it:
The datetime() constructor doesn't need all of the information from the time structure provided by parsedatetime, so we sliced it.
This code results in:
2021-03-19 09:00:00
2021-01-01 18:11:06
Keep in mind that the datetime on the 1st of January took the time of execution into consideration.
Handling Timezones
Sometimes, your application might have to take the timezones of your end-users into consideration. For timezone support, we'll use the pytz package, though you can use other packages as well.
Let's install Pytz via pip:
$ pip install pytz
Now, we can import the parsedatetime and pytz packages into a script, and create a standard Calendar instance:
Let's choose one of these, such as the first one, and pass it in as the tzinfo argument of Calendar's parseDT() function. Other than that, we'll want to supply a datetimeString argument, which is the actual string we want to parse:
datetime_object, status = calendar.parseDT(datetimeString='tomorrow', tzinfo=timezone('Africa/Abidjan'))
This method returns a tuple of a datetime object and the status code of the conversion, which is an integer - 1 meaning "successful", and 0 meaning "unsuccessful".
Let's go ahead and print the datetime_object:
print(datetime_object)
This code results in:
2021-03-16 09:00:00+00:00
Calendar.parseDate()
While Calendar.parse() is a general-purpose parsing method that returns a tuple of a time.struct_time and a status code, the parseDate() method is dedicated to short-form string dates and simply returns the parsed date as a time tuple:
import parsedatetime
calendar = parsedatetime.Calendar()
result = calendar.parseDate('5/5/91')
print(result)
The result now contains the calculated struct_time value of the date we've passed in:
(1991, 5, 5, 14, 31, 18, 0, 74, 0)
But, what do we do when we want to parse the 5th of May 2077? We can try to run the following code:
import parsedatetime
calendar = parsedatetime.Calendar()
result = calendar.parseDate('5/5/77')
print(result)
However, this code will result in:
(1977, 5, 5, 14, 36, 21, 0, 74, 0)
Calendar.parseDate() mistook the short-form date for the more realistic 1977. We can solve this in two ways:
Simply specify the full year - 2077:
import parsedatetime
calendar = parsedatetime.Calendar()
result = calendar.parseDate('5/5/2077')
print(result)
Use a BirthdayEpoch:
import parsedatetime
constants = parsedatetime.Constants()
constants.BirthdayEpoch = 80
# Pass our new constants to the Calendar
calendar = parsedatetime.Calendar(constants)
result = calendar.parseDate('5/5/77')
print(result)
This code will result in:
(2077, 5, 5, 14, 39, 47, 0, 74, 0)
You can access the constants of the parsedatetime library through the Constants object. Here, we've set BirthdayEpoch to 80.
BirthdayEpoch controls how the package handles two-digit years, such as 77. If the parsed two-digit value is less than the BirthdayEpoch we've set, it is added to 2000. Since we set BirthdayEpoch to 80 and parsed 77, it was converted to 2077.
Otherwise, it'll add the parsed value to 1900.
Calendar.parseDateText()
Another alternative to dealing with the issue of mistaken short-form dates is to, well, use long-form dates. For long-form dates, you can use the parseDateText() method:
The method returns a struct_time, so we can easily convert it into a datetime:
print(datetime(*result[:6]))
This results in:
2021-03-28 22:08:40
Conclusion
In this tutorial, we've gone over several ways to parse datetime using the parsedatetime package in Python.
We went over the conversion between strings and datetime objects through parsedatetime, as well as handling timezones with pytz and locales, using the Constants instance of the parsedatetime library.
You get built-in components to work with Microsoft Azure and Amazon Web Services. For instance, Data.Cloud.AmazonAPI contains classes that implement the APIs for Amazon services such as queues and tables, and the TAmazonStorageService class lets you connect to Amazon Simple Storage Service (S3) easily.
Here are some of the methods that allow you to manage buckets and objects:
var
  ResponseInfo: TCloudResponseInfo;
  StorageService: TAmazonStorageService;
  BucketName: string;
begin
  BucketName := 'my-bucket-name-vjsep967w37'; // the bucket name must be unique
  StorageService := TAmazonStorageService.Create(AmazonConnectionInfo1);
  ResponseInfo := TCloudResponseInfo.Create;
  try
    if StorageService.CreateBucket(BucketName, amzbaNotSpecified, amzrNotSpecified, ResponseInfo) then
      Memo1.Lines.Append('Success! Bucket: ' + BucketName + ' created.')
    else
      Memo1.Lines.Append(Format('Failure! %s', [ResponseInfo.StatusMessage]));
  finally
    StorageService.Free;
    ResponseInfo.Free;
  end;
end;
You can find full documentation on the docwiki portal.
In addition, some of Embarcadero's technology partners provide component sets for accessing services such as Amazon S3, Digital Ocean Spaces, and more.
To be clear, besides a complete implementation of the Amazon S3 interface, you get support for all major S3-compatible storage providers.
What is the S3 Library?
Easily connect to Amazon S3 and other S3-compatible storage providers such as Digital Ocean Spaces, Wasabi, Backblaze B2, IBM Cloud Object Storage, Oracle Cloud, Linode, and others.
S3 Library Features
Use local or custom S3 providers such as MinIO and Ceph
Components are thread-safe on critical members
Fast, robust, reliable, and native components
Complete documentation and sample applications
and more
procedure TFormS3client.bGoClick(Sender: TObject);
var inputStr: string;
begin
try
Screen.Cursor := crHourGlass;
is3S3Client1.AccessKey := tAccessKey.Text;
is3S3Client1.SecretKey := tSecretKey.Text;
if cboProvider.ItemIndex = -1 then
cboProvider.ItemIndex := 0;
if cboProvider.ItemIndex = 10 then
is3S3Client1.ServiceProvider := Tis3ServiceProviders(255)
else
is3S3Client1.ServiceProvider := Tis3ServiceProviders(cboProvider.ItemIndex);
if is3S3Client1.ServiceProvider = Tis3ServiceProviders.spCustom then
if InputQuery('Enter Custom URL', 'Custom URL?', inputStr) then
is3S3Client1.Config('URL='+inputStr);
if is3S3Client1.ServiceProvider = Tis3ServiceProviders.spOracle then
if InputQuery('Enter Oracle Namespace', 'Oracle Namespace?', inputStr) then
is3S3Client1.Config('OracleNamespace='+inputStr);
lvwBuckets.Items.Clear();
is3S3Client1.ListBuckets();
except on ex: EIPWorksS3 do
ShowMessage('Exception: ' + ex.Message);
end;
Screen.Cursor := crDefault;
end;
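The reason a single client can target so many providers is that they all speak the S3 wire protocol and differ mainly in their endpoint host, which is what the custom-URL prompt in the procedure above supplies. A language-agnostic sketch of that idea (the hostnames below are illustrative examples, not an authoritative list of provider endpoints):

```python
# Sketch: S3-compatible providers differ mainly in their endpoint host.
# The hostnames here are illustrative, not an authoritative provider list.
PROVIDER_ENDPOINTS = {
    "amazon": "s3.amazonaws.com",
    "digitalocean": "nyc3.digitaloceanspaces.com",
    "wasabi": "s3.wasabisys.com",
    "custom": None,  # supplied by the user, like the InputQuery above
}

def object_url(provider: str, bucket: str, key: str, custom_host: str = "") -> str:
    """Build a path-style URL for an object on the chosen provider."""
    host = PROVIDER_ENDPOINTS.get(provider) or custom_host
    if not host:
        raise ValueError(f"no endpoint known for provider {provider!r}")
    return f"https://{host}/{bucket}/{key}"

print(object_url("amazon", "my-bucket-name-vjsep967w37", "readme.txt"))
# https://s3.amazonaws.com/my-bucket-name-vjsep967w37/readme.txt
```

Everything else (signing, bucket and object operations) stays the same across providers, which is why the Delphi component only needs a provider enum plus an optional URL override.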
How to get the IPWorks S3 Library?
You can head over and check out the S3 Library on the GetIt portal and download it in the IDE using the GetIt Package Manager.
RAD Studio 10.4.2 was planned as a feature-focused follow-up to the quality-focused 10.4.1 release. However, as well as delivering some major features, we also fixed more issues in 10.4.2 than in the previous release!
That applies both to Code Insight, i.e. DelphiLSP, and to other parts of Delphi 10.4.2. Let's take a look at what's new. First, the features…
Error Insight: now Error, Warning and Hint Insight
For many years, you've been able to see code errors detected ahead of compiling, shown via a red zigzag underline in the code editor (a 'red squiggle'). One of the big improvements we made with the introduction of DelphiLSP in 10.4 was to make sure these indications were always correct: there is a 1:1 correlation between the marker in the code editor and the compiler errors you would see if you compiled the code, and all errors shown in the editor and the Structure pane are correct.
In 10.4.2 we've extended this, so you can see warnings and hints in the code editor as well. Warnings and hints provide valuable information about your code: issues that won't prevent compiling, but may prevent your app from running the way you want. Showing these live in the editor as you type gives you much faster feedback and turnaround to fix issues in your code. And for those who prefer to compile without any warnings or hints (a great goal), seeing them inline will be invaluable.
In 10.4.2 we did not enable this by default, so that the code editor isn't covered in several colors for those whose code has many warnings and hints. After initial customer feedback, we may turn it on by default in 10.5! But for this release, you can enable it on the IDE Options > User Interface > Editor > Language page, 'Error Insight' tab, 'Error Insight Display' combo box:
You can control which Error Insight levels are displayed, and which other Error Insight UI is shown.
This tab lets you choose between seeing: errors only; errors and warnings; or errors, warnings, and hints. We recommend you turn on display of all three.
Editor rendering, and other problems with naming
'Error Insight' is a great name, except that now it could really be called Error, Warning and Hint Insight. (No, we haven't changed how we refer to the feature.)
Another great name was 'red squiggles'… except that's now 'red, amber, and blue squiggles.' But that's not all. Now they might not even be squiggles! The Department of Naming Things Real Good here at Embarcadero is quite unhappy with all the new features we're providing you in this release. Check this out:
In 10.4.2, we want the code editor markers to be clear and easy to see, plus we know our customers often like to customize the IDE to their own preferences. For those reasons, we have four different ways to render the underline: the traditional zigzag, but also a curved wave (like other IDEs), a line of dots (my personal favorite, since I find it understated and elegant yet still clear, and I insist I am not at all overthinking an analysis of a few pixels), and a solid underbar. We hope you enjoy configuring this, and especially if you have a high-resolution monitor or vision trouble, you'll find the marker style that suits your needs.
We also show an icon in the editor gutter. This makes it easy to spot errors, warnings, or hints when scrolling quickly. Like the other changes here, this can be controlled or turned off entirely if you wish.
Insight in the editor status bar and tooltips
If you have enough horizontal space, the status bar at the bottom of the code editor will now give you an overview of the number of errors, warnings, and hints in the current unit.
If you hover the mouse over an error (or warning, or hint), we've also tweaked how it is displayed.
LSP server activity
Have you ever wondered what the Code Insight engine is doing, what it's processing, and when it might be ready to give results? In 10.4.2, a small bar at the bottom of the Projects view lists the LSP server activity.
Inherited
In March 2015, more than a year before I joined Embarcadero and at a time when I had no idea I might one day work here, let alone be responsible for this part of Delphi, I entered Quality Portal feature request RSP-10217. It's a popular QP report, with 117 votes and 41 watchers. The request was to extend Ctrl+Click, which navigates to a symbol's declaration, to let you Ctrl+Click on the 'inherited' keyword.
I'm very glad to say that in 10.4.2 this feature is implemented. You can Ctrl+Click on the 'inherited' keyword, and if it's qualified with a method, such as 'inherited Create', you can also Ctrl+Click on the method name, which will likewise be understood as navigating to that inherited method.
Why is this such a useful addition? Navigating to where something is defined is very useful for learning about it and finding out what it does, which is why Ctrl+Click is useful in general. But the functionality used to work only on symbol names. When you invoke an inherited method, in other words invoke the implementation in an ancestor class, that too is something you want to navigate to in order to find out what it does. In fact this is highly useful, because moving within an inheritance hierarchy is important for understanding your object-oriented code. There used to be no way to find the inherited method. Now there is!
As a final touch, code completion after the 'inherited' keyword will now list only methods from ancestor classes.
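For readers coming from other languages: Delphi's 'inherited' plays the same role as super() elsewhere, invoking the ancestor class's implementation, which is exactly the definition the new Ctrl+Click navigation jumps to. A minimal sketch of the concept in Python:

```python
class Animal:
    def describe(self) -> str:
        return "an animal"

class Fox(Animal):
    def describe(self) -> str:
        # super().describe() plays the role of Delphi's
        # 'inherited describe': it calls the ancestor's implementation.
        return super().describe() + ", specifically a fox"

print(Fox().describe())  # an animal, specifically a fox
```

Being able to jump straight from the call site to Animal.describe is what makes this kind of navigation valuable when reading an unfamiliar class hierarchy.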
…and quality!
All of the above are new features, sometimes really interesting new features. But as I mentioned at the start of this post, 10.4.2 was also very much a quality release. For DelphiLSP, sometimes that meant fixing bugs. But it also meant revising features: tweaking, adjusting, making sure they work in less common scenarios, changing behavior based on feedback, and more. Here is a list of just some of the fixes, tweaks, changes, adjustments, and polish we've added to DelphiLSP in this release.
Code completion works in IFDEF blocks for built-in macros the compiler defines in some situations, such as UNICODE or MSWINDOWS
Many improvements to which units are shown when completing code in the uses clause (it will also show .pas and .dcu files on the search and project paths; you can disable DCUs if you need to, in Project Options at a per-platform level); plus a unit 'stem' (such as 'Winapi' in 'Winapi.Windows') is also listed; it even indicates when a unit you're completing is already in the uses clause!
Many improvements to overload resolution, visible when you Ctrl+Click an overloaded method, or when Parameter Insight is shown for a method with multiple overloads
Ctrl+Clicking a method implementation goes to its declaration, and vice versa. Ctrl+Click navigation also works for calls to instantiated generic methods, in many cases for symbols in incorrect (uncompilable) code, and on the argument of the intrinsic Exit; plus improvements when using it in a uses clause
Many improvements for generics, including completion in generic classes showing strict private/protected symbols; Find Declaration finding fields and properties in generic types; finding generic methods in another unit; and more
Many improvements completing and navigating to: attributes; scoped enums (these will display and complete the enum with its scope); listing resource strings; navigating to properties and property getters/setters; and more
Improvements to documentation display, including showing XMLDoc during parameter completion
Many performance tweaks. Even the executable is smaller now
And that's not all: there are many, many more tweaks, changes, and quality fixes throughout DelphiLSP. The above is perhaps a quarter of the list, and you'll notice that many points cover several items. Each has other items too; there are more tweaks to .pas and .dcu lookup handling, for example, that aren't mentioned, or more tweaks around parameter completion, or around how the IDE inserts text, or…
The impression I'd like to convey is just how much has been revised and improved in 10.4.2. Many of the above items you may never notice: they are subtle improvements. The overall sense is that code completion and its related features just work when you expect them to, the way you expect them to.
Overview
Not only does Code Insight in Delphi and RAD Studio 10.4.2 come with some really useful new features, including commonly requested ones (warnings and hints in the editor! Ctrl+Click on 'inherited'! See what the LSP server is doing!), but the whole feature has had many quality revisions. The feedback we've received so far has been very favorable, and we strongly recommend you install 10.4.2 as soon as you can.
RAD Studio 10.4.2 wurde als funktionsorientierte Fortsetzung der qualitätsorientierten Version 10.4.1 geplant. Neben einigen wichtigen Funktionen haben wir in 10.4.2 auch mehr Probleme behoben als in der vorherigen Version!
Dies gilt sowohl für Code Insight oder DelphiLSP als auch für andere Teile von Delphi 10.4.2. Schauen wir uns an, was es Neues gibt. Erstens die Funktionen …
Error Insight – now Error, Warning and Hint Insight
For many years, you have been able to see code errors detected before compiling, indicated by a red zigzag underline in the code editor (a "red squiggle"). One of the big improvements we made when we introduced DelphiLSP in 10.4 was to ensure these indications are always correct: there is a 1:1 correlation between the markers in the code editor and the compiler errors you see when you compile your code, and all errors shown in the editor and in the Structure pane are accurate.
In 10.4.2 we have extended this so that you can also see warnings and hints in the code editor. Warnings and hints contain valuable information about your code: issues that do not prevent compilation, but that may prevent your app from running as intended. Showing them live in the editor as you type gives you much faster feedback and a faster turnaround for fixing problems in your code. And for those who prefer to compile with no warnings or hints at all (a great goal), seeing them inline is invaluable.
In 10.4.2 we have not enabled this by default, so that the code editor does not light up in multiple colors for those whose code contains many warnings and hints. Based on initial customer feedback, we may enable it by default in 10.5! For this release, you can turn it on on the IDE Options > User Interface > Editor > Language page, on the "Error Insight" tab, via the "Error Insight Display" combo box:
You can control which Error Insight levels are displayed, as well as the rest of the Error Insight UI
On this tab you can choose whether to show: errors only; errors and warnings; or errors, warnings and hints. We recommend turning on all three.
Editor rendering, and other problems with naming things
"Error Insight" was a great name, except that now it could really be called Error, Warning and Hint Insight. (No, we have not changed how we refer to the feature.)
Another great name was "red squiggles"… except that is now "red, amber and blue squiggles". But that's not all. Now they might not even be squiggles at all! The Department for Naming Things Really Well here at Embarcadero is rather unhappy about all the new functionality we are giving you in this release. Check this out:
In 10.4.2 we want to make sure the code editor's markers are clear and easy to spot. We also know that our customers often like to customize the IDE to their own preferences. For these reasons, we now have four different ways to render the underline: the traditional zigzag, but also a curved wave (like other IDEs), a dotted line (my personal favorite, since I find it understated and elegant yet still clear, without my over-analyzing a few pixels), and a solid underbar. We hope you have fun configuring it, and especially if you have trouble with a high-resolution monitor or with your eyesight, we hope you find the marker style that suits your needs.
We also show an icon in the editor gutter. This makes it easy to spot errors, warnings or hints when scrolling quickly. Like the other changes here, this can be configured or turned off entirely if you wish.
Insight in the editor status bar and in tooltips
If you have enough horizontal space, the status bar at the bottom of the code editor now gives you an overview of the number of errors, warnings and hints in the current unit.
We have also polished what is displayed when you hover the mouse over an error (or a warning or a hint).
LSP server activity
Have you ever wondered what the Code Insight engine is doing, what it is processing, and when it might be ready to deliver results? In 10.4.2, a small bar at the bottom of the Projects view lists the LSP server's activity.
Inherited
In March 2015, more than a year before I joined Embarcadero and at a time when I had no idea I might one day work here, let alone be responsible for this part of Delphi, I filed the Quality Portal feature request RSP-10217. It is a popular QP report, with 117 votes and 41 watchers. The request was to extend Ctrl+Click, which navigates to a symbol's declaration, so that you can Ctrl+Click on the inherited keyword.
I am very happy to say that this feature is implemented in 10.4.2. You can Ctrl+Click on the "inherited" keyword and, when it is qualified with a method such as "inherited Create", you can also Ctrl+Click on the method name, which is likewise understood as navigating to that inherited method.
Why is this such a useful addition? Navigating to where something is defined is very helpful for learning more about it and finding out what it does; that is why Ctrl+Click is helpful in general. But the functionality used to work only for symbol names. When you call an inherited method, in other words invoke the implementation in an ancestor class, you also want to be able to navigate to it to find out what it does. This is genuinely useful, because moving around within an inheritance hierarchy is important for understanding your object-oriented code. Previously, there was no way to locate the inherited method. Now there is!
Finally, code completion after the "inherited" keyword now lists only methods from ancestor classes.
… And quality!
All of the above are new features, sometimes really nice new features. But as I mentioned at the start of this post, 10.4.2 was also a great quality release. For DelphiLSP, this sometimes meant fixing bugs. But it also meant revising features: tweaking, adjusting, making sure they work in less common scenarios, changing behavior based on feedback, and much more. Here is a list of just some of the fixes, tweaks, changes, adjustments and improvements we have made to DelphiLSP in this release.
Code completion works inside IFDEF blocks for built-in macros that the compiler defines in certain situations, such as UNICODE or MSWINDOWS
Many improvements to which units are listed when completing code in the uses clause (it now also shows .pas and .dcu files in the search and project paths; you can turn off DCUs per platform in the project options if you need to; and it lists unit "stems", such as "Winapi" in "Winapi.Windows"). It even shows you when a unit you are completing is already in the uses clause!
Many improvements to overload resolution, visible when you Ctrl+Click an overloaded method or when Parameter Insight is shown for a method with multiple overloads
Ctrl+Clicking a method implementation navigates to its declaration, and vice versa. Ctrl+Click navigation also works on calls to instantiated generic methods, in many cases on symbols in incorrect (uncompilable) code, and on the argument of the built-in Exit; plus improvements when used in a uses clause
Many improvements for generics, including completion in generic classes with strict private/protected symbols; Find Declaration locating fields and properties in generic types; finding generic methods in another unit; and more
Many improvements completing and navigating to: attributes; scoped enums (they show the enum and complete with its scope); listing resource strings; navigating to properties and property getters/setters; and more
Improvements to the documentation display, including showing XMLDoc during parameter completion
Many performance improvements. Even the executable is smaller now.
And that's not all: there are many, many more tweaks, changes and quality fixes throughout DelphiLSP. The list above is perhaps a quarter of the full list, and you will notice that many bullet points cover several items. Each has company: there are further improvements to .pas and .dcu handling that are not mentioned, for example, further improvements to parameter completion, improvements to how the IDE inserts text, and more.
The impression I want to convey is just how much has been revised and improved in 10.4.2. You may never notice many of the items above: they are subtle improvements. The overall sense is simply that code completion and its related features work when you expect, the way you expect.
Summary
Code Insight in Delphi and RAD Studio 10.4.2 does not just include some really useful new features, among them frequently requested ones (warnings and hints in the editor! Ctrl+Click on "inherited"! Seeing what the LSP server is doing!); the entire feature set has received many quality revisions. The feedback we have received so far has been very positive, and we strongly encourage you to install 10.4.2 as soon as you can.
Machine learning algorithms have hyperparameters that allow the algorithms to be tailored to specific datasets.
Although the impact of hyperparameters may be understood generally, their specific effect on a dataset and their interactions during learning may not be known. Therefore, it is important to tune the values of algorithm hyperparameters as part of a machine learning project.
It is common to use naive optimization algorithms to tune hyperparameters, such as a grid search and a random search. An alternate approach is to use a stochastic optimization algorithm, like a stochastic hill climbing algorithm.
In this tutorial, you will discover how to manually optimize the hyperparameters of machine learning algorithms.
After completing this tutorial, you will know:
Stochastic optimization algorithms can be used instead of grid and random search for hyperparameter optimization.
How to use a stochastic hill climbing algorithm to tune the hyperparameters of the Perceptron algorithm.
How to manually optimize the hyperparameters of the XGBoost gradient boosting algorithm.
Let’s get started.
How to Manually Optimize Machine Learning Model Hyperparameters
Photo by john farrell macdonald, some rights reserved.
Tutorial Overview
This tutorial is divided into three parts; they are:
Manual Hyperparameter Optimization
Perceptron Hyperparameter Optimization
XGBoost Hyperparameter Optimization
Manual Hyperparameter Optimization
Machine learning models have hyperparameters that you must set in order to customize the model to your dataset.
Often, the general effects of hyperparameters on a model are known, but how to best set a hyperparameter and combinations of interacting hyperparameters for a given dataset is challenging.
A better approach is to objectively search different values for model hyperparameters and choose a subset that results in a model that achieves the best performance on a given dataset. This is called hyperparameter optimization, or hyperparameter tuning.
A range of different optimization algorithms may be used, although two of the simplest and most common methods are random search and grid search.
Random Search. Define a search space as a bounded domain of hyperparameter values and randomly sample points in that domain.
Grid Search. Define a search space as a grid of hyperparameter values and evaluate every position in the grid.
Grid search is great for spot-checking combinations that are known to perform well generally. Random search is great for discovery and getting hyperparameter combinations that you would not have guessed intuitively, although it often requires more time to execute.
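To make the contrast concrete, here is a small sketch (not part of the tutorial's own code) of how candidates are generated by each method; the two hyperparameter names and their ranges are assumptions chosen purely for demonstration:

```python
# contrast grid search and random search over a two-hyperparameter space
from itertools import product
from random import seed, uniform

seed(1)

# bounded domain for two hypothetical hyperparameters
eta_values = [0.001, 0.01, 0.1, 1.0]       # candidate learning rates
alpha_values = [0.0, 0.0001, 0.001, 0.01]  # candidate regularization weights

# grid search: evaluate every position in the grid
grid_candidates = list(product(eta_values, alpha_values))
print(len(grid_candidates))  # 4 x 4 = 16 fixed combinations

# random search: sample the same number of points uniformly from the domain
random_candidates = [(uniform(1e-8, 1.0), uniform(0.0, 0.01)) for _ in range(16)]
print(len(random_candidates))
```

In either case, each candidate pair would then be passed to a model-evaluation function and the best-scoring pair kept.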
For more on grid and random search for hyperparameter tuning, see the tutorial:
Grid and random search are primitive optimization algorithms, and it is possible to use any optimization we like to tune the performance of a machine learning algorithm. For example, it is possible to use stochastic optimization algorithms. This might be desirable when good or great performance is required and there are sufficient resources available to tune the model.
Next, let’s look at how we might use a stochastic hill climbing algorithm to tune the performance of the Perceptron algorithm.
Perceptron Hyperparameter Optimization
The Perceptron is a model of a single neuron that can be used for two-class classification problems and provides the foundation for later developing much larger networks.
In this section, we will explore how to manually optimize the hyperparameters of the Perceptron model.
First, let’s define a synthetic binary classification problem that we can use as the focus of optimizing the model.
We can use the make_classification() function to define a binary classification problem with 1,000 rows and five input variables.
The example below creates the dataset and summarizes the shape of the data.
# define a binary classification dataset
from sklearn.datasets import make_classification
# define dataset
X, y = make_classification(n_samples=1000, n_features=5, n_informative=2, n_redundant=1, random_state=1)
# summarize the shape of the dataset
print(X.shape, y.shape)
Running the example prints the shape of the created dataset, confirming our expectations.
(1000, 5) (1000,)
The scikit-learn library provides an implementation of the Perceptron model via the Perceptron class.
Before we tune the hyperparameters of the model, we can establish a baseline in performance using the default hyperparameters.
The complete example of evaluating the Perceptron model with default hyperparameters on our synthetic binary classification dataset is listed below.
# perceptron default hyperparameters for binary classification
from numpy import mean
from numpy import std
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import RepeatedStratifiedKFold
from sklearn.linear_model import Perceptron
# define dataset
X, y = make_classification(n_samples=1000, n_features=5, n_informative=2, n_redundant=1, random_state=1)
# define model
model = Perceptron()
# define evaluation procedure
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1)
# evaluate model
scores = cross_val_score(model, X, y, scoring='accuracy', cv=cv, n_jobs=-1)
# report result
print('Mean Accuracy: %.3f (%.3f)' % (mean(scores), std(scores)))
Running the example evaluates the model and reports the mean and standard deviation of the classification accuracy.
Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.
In this case, we can see that the model with default hyperparameters achieved a classification accuracy of about 78.5 percent.
We would hope that we can achieve better performance than this with optimized hyperparameters.
Mean Accuracy: 0.786 (0.069)
Next, we can optimize the hyperparameters of the Perceptron model using a stochastic hill climbing algorithm.
There are many hyperparameters that we could optimize, although we will focus on two that perhaps have the most impact on the learning behavior of the model; they are:
Learning Rate (eta0).
Regularization (alpha).
The learning rate controls the amount the model is updated based on prediction errors and controls the speed of learning. The default value of eta0 is 1.0. Reasonable values are larger than zero (e.g. larger than 1e-8 or 1e-10) and probably less than 1.0.
By default, the Perceptron does not use any regularization, but we will enable “elastic net” regularization which applies both L1 and L2 regularization during learning. This will encourage the model to seek small model weights and, in turn, often better performance.
We will tune the “alpha” hyperparameter that controls the weighting of the regularization, e.g. the amount it impacts the learning. If set to 0.0, it is as though no regularization is being used. Reasonable values are between 0.0 and 1.0.
First, we need to define the objective function for the optimization algorithm. We will evaluate a configuration using mean classification accuracy with repeated stratified k-fold cross-validation, and we will seek the configuration that maximizes this accuracy.
The objective() function below implements this, taking the dataset and a list of config values. The config values (learning rate and regularization weighting) are unpacked, used to configure the model, which is then evaluated, and the mean accuracy is returned.
# objective function
def objective(X, y, cfg):
# unpack config
eta, alpha = cfg
# define model
model = Perceptron(penalty='elasticnet', alpha=alpha, eta0=eta)
# define evaluation procedure
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1)
# evaluate model
scores = cross_val_score(model, X, y, scoring='accuracy', cv=cv, n_jobs=-1)
# calculate mean accuracy
result = mean(scores)
return result
Next, we need a function to take a step in the search space.
The search space is defined by two variables (eta and alpha). A step in the search space must have some relationship to the previous values and must be bound to sensible values (e.g. between 0 and 1).
We will use a “step size” hyperparameter that controls how far the algorithm is allowed to move from the existing configuration. A new configuration will be chosen probabilistically using a Gaussian distribution with the current value as the mean of the distribution and the step size as the standard deviation of the distribution.
We can use the randn() NumPy function to generate random numbers with a Gaussian distribution.
The step() function below implements this and will take a step in the search space and generate a new configuration using an existing configuration.
# take a step in the search space
def step(cfg, step_size):
# unpack the configuration
eta, alpha = cfg
# step eta
new_eta = eta + randn() * step_size
# check the bounds of eta
if new_eta <= 0.0:
new_eta = 1e-8
# step alpha
new_alpha = alpha + randn() * step_size
# check the bounds of alpha
if new_alpha < 0.0:
new_alpha = 0.0
# return the new configuration
return [new_eta, new_alpha]
Next, we need to implement the stochastic hill climbing algorithm that will call our objective() function to evaluate candidate solutions and our step() function to take a step in the search space.
The search first generates a random initial solution, in this case with eta and alpha values between 0 and 1. The initial solution is then evaluated and taken as the current best working solution.
...
# starting point for the search
solution = [rand(), rand()]
# evaluate the initial point
solution_eval = objective(X, y, solution)
Next, the algorithm iterates for a fixed number of iterations provided as a hyperparameter to the search. Each iteration involves taking a step and evaluating the new candidate solution.
...
# take a step
candidate = step(solution, step_size)
# evaluate candidate point
candidate_eval = objective(X, y, candidate)
If the new solution is better than the current working solution, it is taken as the new current working solution.
...
# check if we should keep the new point
if candidate_eval >= solution_eval:
# store the new point
solution, solution_eval = candidate, candidate_eval
# report progress
print('>%d, cfg=%s %.5f' % (i, solution, solution_eval))
At the end of the search, the best solution and its performance are then returned.
Tying this together, the hillclimbing() function below implements the stochastic hill climbing algorithm for tuning the Perceptron algorithm, taking the dataset, objective function, number of iterations, and step size as arguments.
# hill climbing local search algorithm
def hillclimbing(X, y, objective, n_iter, step_size):
# starting point for the search
solution = [rand(), rand()]
# evaluate the initial point
solution_eval = objective(X, y, solution)
# run the hill climb
for i in range(n_iter):
# take a step
candidate = step(solution, step_size)
# evaluate candidate point
candidate_eval = objective(X, y, candidate)
# check if we should keep the new point
if candidate_eval >= solution_eval:
# store the new point
solution, solution_eval = candidate, candidate_eval
# report progress
print('>%d, cfg=%s %.5f' % (i, solution, solution_eval))
return [solution, solution_eval]
We can then call the algorithm and report the results of the search.
In this case, we will run the algorithm for 100 iterations and use a step size of 0.1, chosen after a little trial and error.
...
# define the total iterations
n_iter = 100
# step size in the search space
step_size = 0.1
# perform the hill climbing search
cfg, score = hillclimbing(X, y, objective, n_iter, step_size)
print('Done!')
print('cfg=%s: Mean Accuracy: %f' % (cfg, score))
Tying this together, the complete example of manually tuning the Perceptron algorithm is listed below.
# manually search perceptron hyperparameters for binary classification
from numpy import mean
from numpy.random import randn
from numpy.random import rand
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import RepeatedStratifiedKFold
from sklearn.linear_model import Perceptron
# objective function
def objective(X, y, cfg):
# unpack config
eta, alpha = cfg
# define model
model = Perceptron(penalty='elasticnet', alpha=alpha, eta0=eta)
# define evaluation procedure
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1)
# evaluate model
scores = cross_val_score(model, X, y, scoring='accuracy', cv=cv, n_jobs=-1)
# calculate mean accuracy
result = mean(scores)
return result
# take a step in the search space
def step(cfg, step_size):
# unpack the configuration
eta, alpha = cfg
# step eta
new_eta = eta + randn() * step_size
# check the bounds of eta
if new_eta <= 0.0:
new_eta = 1e-8
# step alpha
new_alpha = alpha + randn() * step_size
# check the bounds of alpha
if new_alpha < 0.0:
new_alpha = 0.0
# return the new configuration
return [new_eta, new_alpha]
# hill climbing local search algorithm
def hillclimbing(X, y, objective, n_iter, step_size):
# starting point for the search
solution = [rand(), rand()]
# evaluate the initial point
solution_eval = objective(X, y, solution)
# run the hill climb
for i in range(n_iter):
# take a step
candidate = step(solution, step_size)
# evaluate candidate point
candidate_eval = objective(X, y, candidate)
# check if we should keep the new point
if candidate_eval >= solution_eval:
# store the new point
solution, solution_eval = candidate, candidate_eval
# report progress
print('>%d, cfg=%s %.5f' % (i, solution, solution_eval))
return [solution, solution_eval]
# define dataset
X, y = make_classification(n_samples=1000, n_features=5, n_informative=2, n_redundant=1, random_state=1)
# define the total iterations
n_iter = 100
# step size in the search space
step_size = 0.1
# perform the hill climbing search
cfg, score = hillclimbing(X, y, objective, n_iter, step_size)
print('Done!')
print('cfg=%s: Mean Accuracy: %f' % (cfg, score))
Running the example reports the configuration and result each time an improvement is seen during the search. At the end of the run, the best configuration and result are reported.
Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.
In this case, we can see that the best result involved using a learning rate slightly above 1 at 1.004 and a regularization weight of about 0.002 achieving a mean accuracy of about 79.1 percent, better than the default configuration that achieved an accuracy of about 78.5 percent.
Can you get a better result?
Let me know in the comments below.
Now that we are familiar with how to use a stochastic hill climbing algorithm to tune the hyperparameters of a simple machine learning algorithm, let’s look at tuning a more advanced algorithm, such as XGBoost.
XGBoost Hyperparameter Optimization
XGBoost is short for Extreme Gradient Boosting and is an efficient implementation of the stochastic gradient boosting machine learning algorithm.
The stochastic gradient boosting algorithm, also called gradient boosting machines or tree boosting, is a powerful machine learning technique that performs well or even best on a wide range of challenging machine learning problems.
First, the XGBoost library must be installed.
You can install it using pip, as follows:
sudo pip install xgboost
Once installed, you can confirm that it was installed successfully and that you are using a modern version by running the following code:
Running the code, you should see the following version number or higher.
xgboost 1.0.1
Although the XGBoost library has its own Python API, we can use XGBoost models with the scikit-learn API via the XGBClassifier wrapper class.
The model can be instantiated and used for model evaluation just like any other scikit-learn estimator. For example:
...
# define model
model = XGBClassifier()
Before we tune the hyperparameters of XGBoost, we can establish a baseline in performance using the default hyperparameters.
We will use the same synthetic binary classification dataset from the previous section and the same test harness of repeated stratified k-fold cross-validation.
The complete example of evaluating the performance of XGBoost with default hyperparameters is listed below.
# xgboost with default hyperparameters for binary classification
from numpy import mean
from numpy import std
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import RepeatedStratifiedKFold
from xgboost import XGBClassifier
# define dataset
X, y = make_classification(n_samples=1000, n_features=5, n_informative=2, n_redundant=1, random_state=1)
# define model
model = XGBClassifier()
# define evaluation procedure
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1)
# evaluate model
scores = cross_val_score(model, X, y, scoring='accuracy', cv=cv, n_jobs=-1)
# report result
print('Mean Accuracy: %.3f (%.3f)' % (mean(scores), std(scores)))
Running the example evaluates the model and reports the mean and standard deviation of the classification accuracy.
Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.
In this case, we can see that the model with default hyperparameters achieved a classification accuracy of about 84.9 percent.
We would hope that we can achieve better performance than this with optimized hyperparameters.
Mean Accuracy: 0.849 (0.040)
Next, we can adapt the stochastic hill climbing optimization algorithm to tune the hyperparameters of the XGBoost model.
There are many hyperparameters that we may want to optimize for the XGBoost model.
For an overview of how to tune the XGBoost model, see the tutorial:
We will focus on four key hyperparameters; they are:
Learning Rate (learning_rate)
Number of Trees (n_estimators)
Subsample Percentage (subsample)
Tree Depth (max_depth)
The learning rate controls the contribution of each tree to the ensemble. Sensible values are less than 1.0 and slightly above 0.0 (e.g. 1e-8).
The number of trees controls the size of the ensemble, and often, more trees is better to a point of diminishing returns. Sensible values are between 1 tree and hundreds or thousands of trees.
The subsample percentage defines the random sample used to train each tree, expressed as a percentage of the size of the original dataset. Sensible values are between a value slightly above 0.0 (e.g. 1e-8) and 1.0.
The tree depth is the number of levels in each tree. Deeper trees are more specific to the training dataset and perhaps overfit. Shorter trees often generalize better. Sensible values are between 1 and 10 or 20.
First, we must update the objective() function to unpack the hyperparameters of the XGBoost model, configure it, and then evaluate the mean classification accuracy.
# objective function
def objective(X, y, cfg):
# unpack config
lrate, n_tree, subsam, depth = cfg
# define model
model = XGBClassifier(learning_rate=lrate, n_estimators=n_tree, subsample=subsam, max_depth=depth)
# define evaluation procedure
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1)
# evaluate model
scores = cross_val_score(model, X, y, scoring='accuracy', cv=cv, n_jobs=-1)
# calculate mean accuracy
result = mean(scores)
return result
Next, we need to define the step() function used to take a step in the search space.
Each hyperparameter has quite a different range; therefore, we will define the step size (the standard deviation of the distribution) separately for each hyperparameter. We will also define the step sizes inline rather than as arguments to the function, to keep things simple.
The number of trees and the depth are integers, so the stepped values are rounded.
The step sizes are arbitrary, chosen after a little trial and error.
The updated step function is listed below.
# take a step in the search space
def step(cfg):
# unpack config
lrate, n_tree, subsam, depth = cfg
# learning rate
lrate = lrate + randn() * 0.01
if lrate <= 0.0:
lrate = 1e-8
if lrate > 1:
lrate = 1.0
# number of trees
n_tree = round(n_tree + randn() * 50)
if n_tree <= 0.0:
n_tree = 1
# subsample percentage
subsam = subsam + randn() * 0.1
if subsam <= 0.0:
subsam = 1e-8
if subsam > 1:
subsam = 1.0
# max tree depth
depth = round(depth + randn() * 7)
if depth <= 1:
depth = 1
# return new config
return [lrate, n_tree, subsam, depth]
Finally, the hillclimbing() algorithm must be updated to define an initial solution with appropriate values.
In this case, we will define the initial solution with sensible defaults, matching the default hyperparameters, or close to them.
...
# starting point for the search
solution = step([0.1, 100, 1.0, 7])
Tying this together, the complete example of manually tuning the hyperparameters of the XGBoost algorithm using a stochastic hill climbing algorithm is listed below.
# xgboost manual hyperparameter optimization for binary classification
from numpy import mean
from numpy.random import randn
from numpy.random import rand
from numpy.random import randint
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import RepeatedStratifiedKFold
from xgboost import XGBClassifier
# objective function
def objective(X, y, cfg):
# unpack config
lrate, n_tree, subsam, depth = cfg
# define model
model = XGBClassifier(learning_rate=lrate, n_estimators=n_tree, subsample=subsam, max_depth=depth)
# define evaluation procedure
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1)
# evaluate model
scores = cross_val_score(model, X, y, scoring='accuracy', cv=cv, n_jobs=-1)
# calculate mean accuracy
result = mean(scores)
return result
# take a step in the search space
def step(cfg):
# unpack config
lrate, n_tree, subsam, depth = cfg
# learning rate
lrate = lrate + randn() * 0.01
if lrate <= 0.0:
lrate = 1e-8
if lrate > 1:
lrate = 1.0
# number of trees
n_tree = round(n_tree + randn() * 50)
if n_tree <= 0.0:
n_tree = 1
# subsample percentage
subsam = subsam + randn() * 0.1
if subsam <= 0.0:
subsam = 1e-8
if subsam > 1:
subsam = 1.0
# max tree depth
depth = round(depth + randn() * 7)
if depth <= 1:
depth = 1
# return new config
return [lrate, n_tree, subsam, depth]
# hill climbing local search algorithm
def hillclimbing(X, y, objective, n_iter):
# starting point for the search
solution = step([0.1, 100, 1.0, 7])
# evaluate the initial point
solution_eval = objective(X, y, solution)
# run the hill climb
for i in range(n_iter):
# take a step
candidate = step(solution)
# evaluate candidate point
candidate_eval = objective(X, y, candidate)
# check if we should keep the new point
if candidate_eval >= solution_eval:
# store the new point
solution, solution_eval = candidate, candidate_eval
# report progress
print('>%d, cfg=[%s] %.5f' % (i, solution, solution_eval))
return [solution, solution_eval]
# define dataset
X, y = make_classification(n_samples=1000, n_features=5, n_informative=2, n_redundant=1, random_state=1)
# define the total iterations
n_iter = 200
# perform the hill climbing search
cfg, score = hillclimbing(X, y, objective, n_iter)
print('Done!')
print('cfg=[%s]: Mean Accuracy: %f' % (cfg, score))
Running the example reports the configuration and result each time an improvement is seen during the search. At the end of the run, the best configuration and result are reported.
Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and comparing the average outcome.
In this case, we can see that the best result involved using a learning rate of about 0.02, 52 trees, a subsample rate of about 50 percent, and a large depth of 53 levels.
This configuration resulted in a mean accuracy of about 87.3 percent, better than the default configuration that achieved an accuracy of about 84.9 percent.
Can you get a better result?
Let me know in the comments below.
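The same climb-and-clamp pattern works for any objective, not just model accuracy. As a minimal, self-contained sketch (no XGBoost or scikit-learn required; the objective here is a made-up quadratic, maximized at x = 3, standing in for the cross-validated accuracy above):

```python
import random

# toy objective: maximized at x = 3
def objective(x):
    return -(x - 3.0) ** 2

# take a bounded step, clamping into [0, 10] just like the config values are clamped above
def step(x):
    x = x + random.gauss(0, 0.5)
    return min(max(x, 0.0), 10.0)

random.seed(1)
solution = 0.0
solution_eval = objective(solution)
for i in range(200):
    candidate = step(solution)
    candidate_eval = objective(candidate)
    # greedy acceptance: keep the candidate only if it is at least as good
    if candidate_eval >= solution_eval:
        solution, solution_eval = candidate, candidate_eval

print('x=%.3f f(x)=%.5f' % (solution, solution_eval))
```

With 200 greedy steps the search settles very close to x = 3; the structure is identical to the hillclimbing() function above, only the objective and step functions change.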
One of the many great advantages of using C++ for applications and projects is the access you get to the vast range of available C++ libraries and frameworks. Basically, there is a C++ library for everything, and if there isn't, there is definitely a C library for it.
In the past, it was generally a challenge to integrate different libraries into C++ projects, because compatibility varied between the different C++ compilers. A project built with GCC would have trouble compiling on VC++, a project built with VC++ would have trouble compiling on BCC, and so on. Fortunately, we have come a long way since those days, and C++ compilers now have a substantial level of compatibility with each other.
C++ Builder's use of CLANG is no exception. While the classic compiler often has trouble with modern C++ syntax, the CLANG compiler is one of the most standards-compliant C++ compilers available, and it opens up the vast universe of C++ libraries to your C++ Builder projects.
That is not to say it is trivial. There are always a few tricks and tweaks you need to make to use any library in your projects, but compared with what used to be necessary, it's a breeze.
In this blog post, we will explore what it takes to get a fairly common C++ library, SQLiteCpp, working in a C++ Builder project.
What is SQLiteCpp?
SQLiteCpp is a C++ RAII wrapper around the SQLite database C library, providing an excellent C++ interface to this almost universal, portable relational database library.
SQLite is used in many different applications, from embedded projects to mainstream applications, as an easy-to-use integrated database for storing, querying, and retrieving data of many different types.
We will use SQLiteCpp to create a simple application that stores and retrieves some data inside a simple console application in C++ Builder.
Getting the Library
SQLiteCpp is hosted on GitHub, and the repository includes all the files needed to compile it into your application.
1. Go to https://github.com/SRombauts/SQLiteCpp
2. Download the latest release and extract it into a folder
Setting Up the Projects
SQLiteCpp supports the CMake build system, so we could use that to build our libraries with C++ Builder, but it is often more interesting and direct to simply create the projects yourself. This has the added advantage that you can customize the build to suit your usage.
3. Create a directory inside the extraction folder called cbuilder. This will contain our C++ Builder-specific project files. The resulting directory structure should look like this:
4. We want to build this library as a static library that we can link with our C++ Builder application, so go to RAD Studio and create a new static library project. Save this project as sqlitecpp.cbproj inside the cbuilder directory.
5. Go into the Project Options and enable the CLANG compiler for all configurations:
6. Go to the Librarian settings and set the Page Size to 64 (this is based on experience; if you don't know what page size to use, the TLIB librarian will tell you whether the page size needs to be adjusted when you build the project).
Selecting the Source Files
Now that we have our project set up, we need to add the required source files to the project to be compiled. How to determine which files to include differs for each project, and sometimes it takes some digging to work out the right files. CMakeLists.txt can definitely help with this. The following guidelines should help:
Look for a src directory; files with a .c, .cpp, or .cxx extension are the source files.
Ignore files that contain a main() method. These are generally test, demo, or example files that are intended to be standalone applications.
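The selection guidelines above can be sketched as a small helper script (a hypothetical aid for illustration, not part of the post's workflow): it walks a src directory, keeps .c/.cpp/.cxx files, and skips any file whose text defines a main():

```python
import os

def candidate_sources(src_dir):
    """Collect .c/.cpp/.cxx files, skipping standalone demo/test files that define main()."""
    picked = []
    for root, _dirs, files in os.walk(src_dir):
        for name in files:
            if not name.endswith(('.c', '.cpp', '.cxx')):
                continue
            path = os.path.join(root, name)
            with open(path, 'r', errors='ignore') as f:
                text = f.read()
            # crude heuristic: a main() definition marks a standalone program
            if 'int main(' in text:
                continue
            picked.append(path)
    return sorted(picked)
```

A text search for `int main(` is only a heuristic; for an unfamiliar library, cross-check the result against the targets listed in CMakeLists.txt.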
For SQLiteCpp, these are the source files:
sqlite3/sqlite3.c
src/Transaction.cpp
src/Backup.cpp
src/Column.cpp
src/Database.cpp
src/Exception.cpp
src/Statement.cpp
7. Add these files to the library project.
Building the Library
If you tried to build this library right now, you would get some errors similar to the following:
We need to update some project options and set some include paths.
8. Set the include path. You may have noticed that the project has an include directory at the same level as the src files. Add this folder to the project include path so the headers can be found:
9. Build the project. It should complete successfully. Congratulations, you now have your SQLiteCpp library.
Using the Library
Now that we have our library, we can create a simple application to test it.
10. Create a new C++ console application in the same project group. Choose the Visual Component Library as the framework for the console application so we can see this library working with the VCL.
11. Save this project as test_sqlite.cbproj in the cbuilder folder.
12. Go into the Project Options for this project, add ..\include as an include path, and enable the CLANG compiler, just as we did for the SQLiteCpp library.
13. Add the include paths at the top of the C++ file in our test project:
Note that we used the #pragma link directive to tell the compiler that we need to link the sqlitecpp.lib library from the library project. Alternatively, you can simply add the .lib file to the test project, but using #pragma link can make this simpler.
14. Now add some code to create a database, insert some data, and read it back again:
#include <tchar.h>
#include <iostream>
#include <conio.h>
#include <SQLiteCpp/SQLiteCpp.h>

#pragma link "sqlitecpp.lib"

int _tmain(int argc, _TCHAR* argv[])
{
    // Open a database file in create/write mode
    SQLite::Database db("test.db3", SQLite::OPEN_READWRITE | SQLite::OPEN_CREATE);
    std::cout << "SQLite database file " << db.getFilename().c_str() << "\n";
    // Create a new table with an explicit "id" column aliasing the underlying rowid
    db.exec("DROP TABLE IF EXISTS test");
    db.exec("CREATE TABLE test (id INTEGER PRIMARY KEY, value TEXT)");
    // first row
    db.exec("INSERT INTO test VALUES (NULL, \"test\")");
    // second row
    db.exec("INSERT INTO test VALUES (NULL, \"second\")");
    // update the second row
    db.exec("UPDATE test SET value=\"second-updated\" WHERE id='2'");
    // Check the results: expect two rows of results
    SQLite::Statement query(db, "SELECT * FROM test");
    std::cout << "SELECT * FROM test :\n";
    while (query.executeStep())
    {
        std::cout << "row ("
                  << query.getColumn(0) << ", \""
                  << query.getColumn(1) << "\")\n";
    }
    getch();
    return 0;
}
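As a quick sanity check on the SQL itself, the same sequence of statements can be run with Python's built-in sqlite3 module. This is purely an illustrative aside, not part of the C++ Builder project; an in-memory database stands in for test.db3, and single-quoted SQL string literals are used, which is standard SQLite form:

```python
import sqlite3

# in-memory database stands in for test.db3
db = sqlite3.connect(':memory:')
db.execute("DROP TABLE IF EXISTS test")
db.execute("CREATE TABLE test (id INTEGER PRIMARY KEY, value TEXT)")
# NULL in an INTEGER PRIMARY KEY column auto-assigns the rowid, as in the C++ example
db.execute("INSERT INTO test VALUES (NULL, 'test')")
db.execute("INSERT INTO test VALUES (NULL, 'second')")
db.execute("UPDATE test SET value='second-updated' WHERE id=2")
rows = db.execute("SELECT * FROM test").fetchall()
for row in rows:
    print('row', row)
```

The two rows that come back, (1, 'test') and (2, 'second-updated'), are exactly what the C++ program's SELECT loop should print.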
15. Finally, run the project to compile it and see the library in action:
Final Thoughts
As this simple exercise has shown, C++ Builder with the CLANG compiler opens up a world of possibilities for the different libraries and frameworks that can be integrated into your projects. We did not have to make a single code change to get this open-source C++ library compiling and working in our C++ Builder applications.
There are thousands of other C++ projects out there, and I strongly encourage you to experiment with using them in your C++ projects!
Developers often need to retrieve Windows event messages to diagnose system problems and predict future issues. How do you retrieve event logs programmatically for a given source such as System, Security, or Hardware events? If you don't know where to start, don't worry: a component from MiTeC's System Information Component Suite retrieves event messages quickly and with little code. In this blog post we will learn how to use the TMiTeC_EventLog component.
Platforms: Windows.
Installation Steps:
You can easily install this Component Suite from GetIt Package Manager. The steps are as follows.
In the RAD Studio IDE, navigate to Tools -> GetIt Package Manager, select Components in Categories -> Components -> Trial, choose MiTeC System Information Component Suite 14.3, and click the Install button.
Read the license and click Agree All. An information dialog appears saying "Requires a restart of RAD Studio at the end of the process. Do you want to proceed?" Click Yes and continue.
It will download and install the plugin. Once it is installed, click Restart now.
How to run the Demo app:
Navigate to the Demos folder of the System Information Management Suite trial setup, which is installed during the GetIt installation, e.g. C:\Users\<user>\Documents\Embarcadero\Studio\21.0\CatalogRepository\MiTeC-14.3\Demos\Delphi12
Open the ELView project in RAD Studio 10.4.1, then compile and run the application.
This Demo App shows how to retrieve event logs programmatically for the given source and details of the particular event message.
Components used in MSIC ELView Demo App:
MiTeC_EventLog Properties
TMiTeC_EventLog: retrieves Windows Event Log messages for a given source.
TComboBox to list the event source categories, such as System, Security, and Hardware events.
TEdit to provide the filter text, which helps filter the event log messages down to those the user wants.
TListView to list the event log messages for a particular source.
TButtons to save and refresh.
Implementation Details:
An instance of TMiTeC_EventLog named EL is created, and the event source containers are retrieved by looping over the ContainerCount property. Use OnReadEventLog to update the application caption for every 1000 event messages.
The SourceFilter property helps filter the text within the event log messages. Set this property to the TEdit's Text value.
When the combo box changes, list the event logs by looping over RecordCount. Each record, of type TLogRecord, provides the EventType, DateTime, Source, Category, EventID, Username, Domain, Computer, Description, BinaryData, and CharData values.
You can provide the Username, Password, and DomainName for connecting to a remote machine via the new WinEvt API.
procedure TForm1.cbChange(Sender: TObject);
var
  i: Integer;
  h: Boolean;
begin
  Memo.Lines.Clear;
  if cb.ItemIndex=-1 then
    Exit;
  bAction.Caption:='Cancel';
  bAction.OnClick:=cmCancel;
  bLoad.Enabled:=False;
  bSave.Enabled:=False;
  cb.Enabled:=False;
  eFilter.Enabled:=False;
  lv.Enabled:=False;
  Memo.Enabled:=False;
  FCancel:=False;
  Screen.Cursor:=crHourglass;
  try
    et:=GetTickCount64;
    EL.SourceFilter:=eFilter.Text;
    EL.SourceName:=cb.Text;
    h:=True;
    FCancel:=False;
    Caption:=Format('EventLog Viewer - %d records / %1.2f s',[EL.RecordCount,(GetTickCount64-et)/1000]);
    with lv.Items do begin
      BeginUpdate;
      try
        Clear;
        Update;
        for i:=0 to EL.RecordCount-1 do
          with Add do begin
            Caption:=DatetimeToStr(EL.Records[i].DateTime);
            SubItems.Add(EL.Records[i].Source);
            SubItems.Add(IntToStr(EL.Records[i].EventID));
            SubItems.Add(EL.Records[i].Category);
            SubItems.Add(EL.Records[i].Computer);
            SubItems.Add(EL.Records[i].Description);
            ImageIndex:=Integer(EL.Records[i].EventType);
          end;
      finally
        EndUpdate;
      end;
    end;
  finally
    EL.Clear;
    bAction.Caption:='Refresh';
    bAction.OnClick:=cmRefresh;
    bLoad.Enabled:=True;
    bSave.Enabled:=True;
    cb.Enabled:=True;
    eFilter.Enabled:=True;
    lv.Enabled:=True;
    Memo.Enabled:=True;
    Screen.Cursor:=crDefault;
  end;
  lv.SetFocus;
end;
Show the selected item's subitem in the Memo text.
procedure TForm1.lvSelectItem(Sender: TObject; Item: TListItem;
  Selected: Boolean);
begin
  Memo.Lines.Text:=Item.SubItems[Item.SubItems.Count-1];
end;
MiTeC EventLog Demo
It's really that simple to retrieve event logs and their message details from various event sources in your application. Use this MiTeC component suite to get the job done quickly.
In this article we'll be taking a look at how to read and write CSV files in Kotlin, specifically, using Apache Commons.
Apache Commons Dependency
Since we're working with an external library, let's go ahead and import it into our Kotlin project. If you're using Maven, simply include the commons-csv dependency:
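The dependency block might look like this (the version number here is just an example; use whichever release is current):

```xml
<dependency>
    <groupId>org.apache.commons</groupId>
    <artifactId>commons-csv</artifactId>
    <version>1.9.0</version>
</dependency>
```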
Also, since we'll be reading these records into custom objects, let's make a data class:
data class Student (
val studentId: Int,
val firstName: String,
val lastName: String,
val score: Int
)
Reading a CSV File in Kotlin
Let's first read this file using a BufferedReader, created from a Path to the resource we'd like to read:
val bufferedReader = Files.newBufferedReader(Paths.get("/resources/students.csv"))
Then, once we've read the file into the buffer, we can use the buffer itself to initialize a CSVParser instance:
val csvParser = CSVParser(bufferedReader, CSVFormat.DEFAULT)
Given how volatile the CSV format can be - to remove the guesswork, you'll have to specify the CSVFormat when initializing the parser. This parser, initialized this way, can only then be used for this CSV format.
Since we're following the textbook example of the CSV format, and we're using the default separator, a comma (,) - we'll pass in CSVFormat.DEFAULT as the second argument.
Now, the CSVParser is an Iterable that contains CSVRecord instances. Each line is a CSV record. Naturally, we can then iterate over the csvParser instance and extract records from it:
for (csvRecord in csvParser) {
    val studentId = csvRecord.get(0).toInt()
    val studentName = csvRecord.get(1)
    val studentLastName = csvRecord.get(2)
    val studentScore = csvRecord.get(3).toInt()
    println(Student(studentId, studentName, studentLastName, studentScore))
}
For each CSVRecord, you can get its respective cells using the get() method, and passing in the index of the cell, starting at 0. Then, we can simply use these in the constructor of our Student data class.
Though, this approach isn't great. We need to know the order of the columns, as well as how many columns there are to use the get() method, and changing anything in the CSV file's structure totally breaks our code.
Reading a CSV File with Headers in Kotlin
It's reasonable to know what columns exist, but a little less so in which order they're in.
Usually, CSV files have a header line that specifies the names of the columns, such as StudentID, FirstName, etc. When constructing the CSVParser instance, following the Builder Design Pattern, we can specify whether the file we're reading has a header row or not, in the CSVFormat.
By default, the CSVFormat assumes that the file doesn't have a header. Let's first add a header row to our CSV file:
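With the header added, the file would look something like this (the rows are taken from the sample data used later in the article):

```csv
StudentID,FirstName,LastName,Score
101,John,Smith,90
203,Mary,Jane,88
309,John,Wayne,96
```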
Now, let's initialize the CSVParser instance, and set a couple of optional options in the CSVFormat along the way:
val bufferedReader = Files.newBufferedReader(Paths.get("/resources/students.csv"))
val csvParser = CSVParser(bufferedReader, CSVFormat.DEFAULT
    .withFirstRecordAsHeader()
    .withIgnoreHeaderCase()
    .withTrim())
This way, the first record (row) in the file will be treated as the header row, and the values in that row will be used as the column names.
We've also specified that the header case doesn't mean much to us, turning the format into a case-insensitive one.
Finally, we've also told the parser to trim the records, which removes redundant whitespace from the start and end of values, if there is any. Some of the other options you can fiddle around with include withDelimiter(), withQuote(), and withRecordSeparator().
These are used if you'd like to change the default behavior: set a new delimiter, specify how to treat quotes (since they can oftentimes break the parsing logic), and specify the record separator present at the end of each record.
Finally, once we've loaded the file in and parsed it with these settings, you can retrieve CSVRecords as previously seen:
for (csvRecord in csvParser) {
    val studentId = csvRecord.get("StudentID").toInt()
    val studentName = csvRecord.get("FirstName")
    val studentLastName = csvRecord.get("LastName")
    val studentScore = csvRecord.get("Score").toInt()
    println(Student(studentId, studentName, studentLastName, studentScore))
}
This is a much more forgiving approach, since we don't need to know the order of the columns themselves. Even if they get changed at any given time, the CSVParser's got us covered.
Similar to reading files, we can also write CSV files using Apache Commons. This time around, we'll be using the CSVPrinter.
Just as the CSVParser accepts a BufferedReader, the CSVPrinter accepts a BufferedWriter, along with the CSVFormat we'd like it to use while writing the file.
Let's create a BufferedWriter, and instantiate a CSVPrinter instance:
val writer = Files.newBufferedWriter(Paths.get("/resources/students.csv"))
val csvPrinter = CSVPrinter(writer, CSVFormat.DEFAULT
    .withHeader("StudentID", "FirstName", "LastName", "Score"))
The printRecord() method of the CSVPrinter instance is used to write out records. It accepts all the values for that record and prints them out on a new line. Calling the method over and over allows us to write many records. You can either specify each value individually, or simply pass in a list of data.
There's no need to use the printRecord() method for the header row itself, since we've already specified it with the withHeader() method of the CSVFormat. Without specifying the header there, we would've had to print out the first row manually.
Don't forget to flush() and close() the printer after use.
Since we're working with a list of students here, and we can't just print the record like this, we'll loop through the student list, put their info into a new list and print that list of data using the printRecord() method:
val students = listOf(
    Student(101, "John", "Smith", 90),
    Student(203, "Mary", "Jane", 88),
    Student(309, "John", "Wayne", 96)
)

for (student in students) {
    val studentData = listOf(
        student.studentId,
        student.firstName,
        student.lastName,
        student.score)
    csvPrinter.printRecord(studentData)
}
csvPrinter.flush()
csvPrinter.close()
RAD Studio 10.4.2 was planned as a feature-focused follow-up to the quality-focused 10.4.1 release. However, besides delivering some major features, we also fixed more issues in 10.4.2 than in the previous release!
This applies as much to Code Insight, or DelphiLSP, as to other parts of Delphi 10.4.2. Let’s have a look at what’s new. First, the features…
Error Insight — now Error, Warning and Hint Insight
For many years you’ve been able to see code errors detected ahead of time before you compile, shown via a red zigzag underline in the code editor (a ‘red squiggly’.) One of the great improvements we made with the introduction of DelphiLSP in 10.4 was to make sure these indications were always correct: there is a 1:1 correlation between the marker in the code editor and the compiler errors you’d see if you compiled the code, and all errors shown in the editor and Structure pane are correct.
In 10.4.2 we’ve extended this so you can see warnings and hints in the code editor too. Warnings and hints provide valuable information about your code, issues that won’t prevent compilation but may prevent your app running the way you want. Showing these live in the editor as you type gives you much faster feedback and turnaround to fix issues in your code. And for those who prefer to compile without any warnings or hints – a great goal – seeing them inline will be invaluable.
A hint and warning visible in the code editor
In 10.4.2, we didn’t enable this by default, so that the code editor would not be covered in multiple colours for those whose code has many warnings and hints. After initial customer feedback we may turn it on by default in 10.5! But for this release, you can turn it on in the IDE Options > User Interface > Editor > Language page, ‘Error Insight’ tab, ‘Error Insight Display’ combo box:
You can control what Error Insight levels are displayed and what other Error Insight UI is shown
This tab lets you choose between seeing: errors only; errors and warnings; or errors, warnings and hints. We recommend you turn on showing all three.
Editor Rendering and Other Naming Problems
‘Error Insight’ is a great name – except now it could really be called Error, Warning and Hint Insight. (No, we haven’t changed how we refer to the feature.)
Another great name was ‘red squigglies’… except that’s now ‘red, amber and blue squigglies.’ But that’s not all. Now they may not even be squiggles at all! The Department Of Naming Things Real Good here at Embarcadero is quite unhappy with all the new features we’re providing you with this release. Look at this:
In 10.4.2, we want to ensure the code editor markers are clear and easy to see, plus we know our customers often like to customise the IDE to their own preferences. For those reasons we have four different ways to render the underline: the traditional zigzag, but also a curved wave (like other IDEs), a line of dots (my personal favourite, since I think it’s understated and elegant but still clear, and insist I am not overthinking an analysis of a few pixels at all), and a solid underbar. We hope you’ll enjoy configuring this and, especially if you have a high-resolution monitor or eyesight issues, that you’ll find the marker style that suits your needs.
We also show an icon in the editor gutter. This makes it easy to spot errors, warnings or hints when scrolling fast. Like the other changes here, this can be controlled or completely turned off if you wish.
Insight in the Editor Status Bar and Tooltips
If you have enough horizontal room, the status bar at the bottom of the code editor will now give you an overview of the number of errors, warnings and hints in the current unit.
If you mouse over an error (or warning or hint) we’ve also tweaked how this is displayed.
LSP Server Activity
Have you ever wondered what the Code Insight engine is doing, what it’s processing, and when it might be ready to give results? In 10.4.2, a small bar at the bottom of the Projects view lists the LSP server’s activity.
Inherited
In March 2015, over a year before I joined Embarcadero and at a time when I had no idea I might work here one day, let alone be responsible for this part of Delphi, I entered the Quality Portal feature request RSP-10217. It’s a popular QP report with 117 votes and 41 watchers. The request was to extend Ctrl+Click, which navigates to the declaration of a symbol, to allow you to Ctrl+Click on the ‘inherited’ keyword.
I am very happy to say that in 10.4.2 this feature is implemented. You can Ctrl+Click on the ‘inherited’ keyword and, if qualified with a method such as ‘inherited Create’, also Ctrl+Click on the method name, which will also be understood as navigating to that inherited method.
Ctrl+Click on the ‘inherited’ keyword
Why is this such a useful addition? Navigating to where something is defined is very helpful for learning about it and finding out what it does, and it’s why Ctrl+Click in general is useful. But the functionality used to work only on symbol names. When you invoke an inherited method, or in other words invoke the implementation in an ancestor class, that too is something you want to be able to navigate to in order to find out what it does: in fact this is highly useful, because moving around within an inheritance hierarchy is important for understanding your object-oriented code. There used to be no way to find the inherited method. Now there is!
In a final touch, code completing after the ‘inherited’ keyword will now only list methods from ancestor classes.
… and Quality!
All the above are new features, sometimes really neat new features. But as I mentioned at the start of this post, 10.4.2 was a big quality release as well. For DelphiLSP, sometimes this has meant fixing bugs. But it’s also meant revising features – tweaking, adjusting, ensuring they work in less common scenarios, changing behaviour based on feedback, and more. Here’s a list of just some of the fixes, tweaks, changes, adjustments and polish we’ve added to DelphiLSP this release.
Code completion functions in IFDEF blocks for inbuilt macros which the compiler has defined in some situations, such as UNICODE or MSWINDOWS
Many improvements to which units are showing when code completing in the uses clause (it will also show .pas and .dcu files in the search and project paths; you can disable DCUs if you need in the Project Options on a per-platform level); plus a unit ‘stem’ (like ‘Winapi’ in ‘Winapi.Windows’) is also listed; it even indicates to you when a unit you’re completing is already in the uses clause!
Many improvements to overload resolution, which will be visible when Ctrl+Click-ing an overloaded method, or displaying Parameter Insight when there are multiple overloads for a method
Ctrl+Click-ing on a method implementation will go to its declaration, and vice versa. Ctrl+Click navigation also works for calls to instantiated generic methods, in many cases on symbols in incorrect (uncompilable) code; and on the Exit inbuilt’s argument; plus improvements using it in a uses clause
Many improvements for generics, including completing in generic classes showing strict private/protected symbols; find declaration finding fields and properties in generic types; finding generic methods in another unit; and more
Many improvements completing and navigating to: attributes; scoped enums (they will display and complete the enumeration with their scope); listing resource strings; navigating to properties and property getter/setters; and more
Documentation display improvements, including showing XMLDoc during parameter completion
Many performance tweaks. Even the executable is a smaller size now.
And that’s not it – there are many, many more tweaks, changes, and quality fixes throughout all of DelphiLSP. The above is perhaps a quarter of the list, and you’ll notice many dot points cover multiple items. Each one has other items – there are more tweaks to handling .pas and .dcu lookup, for example, which aren’t mentioned, or more tweaks around parameter completion, or tweaks around how the IDE inserts text, or…
The impression I’d like to communicate is just how much has been revised and improved in 10.4.2. Many of the above items you might not notice: they are subtle improvements. The general sense is that code completion and related features just work when you expect them to, as you expect them to.
Overview
Not only does Code Insight in Delphi and RAD Studio 10.4.2 come with some really useful new features, including commonly requested ones — warnings and hints in the editor! Ctrl+click on ‘inherited’! See what the LSP server is doing! — the entire feature has many quality revisions. The feedback we’ve got so far has been very favourable, and we highly recommend you install 10.4.2 as soon as you can.
One of the many big advantages to using C++ for applications and projects is the access one has to the vast library of C++ libraries and frameworks available. Basically, there is a C++ library for anything, and if there isn’t, there is definitely a C library for it.
Back in the day, it was generally a challenge to integrate different libraries into C++ projects due to the variances in compatibility between different C++ compilers. A project built with GCC would have trouble compiling on VC++, and a project built with VC++ would have trouble compiling on BCC, etc. Thankfully, we’ve come a long way since those days, and C++ compilers now have a substantial level of compatibility with each other.
C++Builder’s use of CLANG is no exception to this. While the classic compiler often has issues with modern C++ syntax, the CLANG compiler is one of the most standards compliant C++ compilers available, and as such, opens up the vast universe of C++ libraries to your C++Builder projects.
That’s not to say it’s trivial. There are always some tricks and tweaks needed to use any library in your projects, but in comparison to what was necessary before, it’s dead easy.
In this blog post, we’ll explore what it takes to get a fairly common C++ library, SQLiteCpp, working in a C++Builder project.
What is SQLiteCpp
SQLiteCpp is a C++ RAII wrapper around the SQLite database C library, providing an excellent C++ interface to this almost universally used, portable relational database library.
SQLite is used in many different applications, ranging from embedded projects to mainstream applications, as an easy-to-use integrated database for storing, querying and retrieving data of many different types.
We’ll use SQLiteCpp to create a simple application that stores and retrieves some data inside a simple console application in C++Builder.
Getting the Library
SQLiteCpp is hosted on Github and the repository includes all the files necessary to compile into your application.
1. Go to https://github.com/SRombauts/SQLiteCpp
2. Download the latest release and extract it into a folder
Setting up the Projects
SQLiteCpp supports the CMake build system, so we could use that to build our libraries with C++Builder, but it’s often more interesting and direct to just create the projects yourself. This has the added advantage that you can customize the build to suit your use.
3. Create a directory inside the extracted folder called cbuilder. This will contain our C++Builder-specific project files. The resulting directory structure should look like this:
4. We want to build this library as a static library that we can link into our C++Builder application, so go to RAD Studio and create a new static library project. Save this project as sqlitecpp.cbproj inside the cbuilder directory.
5. Go into the Project Options and enable the CLANG compiler for all configurations:
6. Go to the Librarian settings and set the Page Size to 64 (this is based on experience – if you don’t know the page size to use, the TLIB linker will tell you whether the page size needs to be adjusted when you build the project)
Selecting the Source Files
Now that we have our project set up, we need to add the necessary source files into the project to be compiled. The way to determine which files to include differs for each project, and it sometimes requires some digging to determine the right files. CMakeLists.txt can definitely help with this. The following guidelines should help:
Look for a src directory; files with a .c, .cpp or .cxx extension will be the source files
Ignore files that contain a main() method. These are generally test, demo or example files that are intended to be standalone applications.
For SQLiteCpp, these are the source files:
sqlite3/sqlite3.c
src/Transaction.cpp
src/Backup.cpp
src/Column.cpp
src/Database.cpp
src/Exception.cpp
src/Statement.cpp
7. Add these files to the library project.
Building the Library
If you were to try to build this library right now, you’d get some errors similar to the following:
We need to update some project options and set some include paths.
8. Set the include path. You may have noticed that the project has an include directory that’s at the same level as the src files. Add this folder to the project include path so that the headers can be found:
9. Build the project. It should complete successfully. Congratulations, you now have your SQLiteCpp library.
Using the Library
Now that we have our library, we can create a simple application to test it out.
10. Create a new C++ console application in the same project group. Choose the Visual Component Library as the framework for the console application so we can see this library working with the VCL.
11. Save this project as test_sqlite.cbproj in the cbuilder folder.
12. Go into the Project Options for this project, add ..\include as an include path, and enable the CLANG compiler, just like we did for the SQLiteCpp library.
13. Add in the include paths into the top of the C++ file in our test project:
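A minimal sketch of those lines, assuming SQLiteCpp's default header layout and the library name we chose in step 4:

```cpp
#include <iostream>
#include <SQLiteCpp/SQLiteCpp.h>

// Link against the static library built by the sqlitecpp project
#pragma link "sqlitecpp.lib"
```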
Note that we used the #pragma link directive to tell the compiler that we need to link the sqlitecpp.lib library from the library project. You can alternatively just add the .lib file to the test project, but using #pragma link can make this simpler.
14. Now add some code to create a database, insert some data and read it back out again:
int _tmain(int argc, _TCHAR* argv[])
{
    // Open a database file in create/write mode
    SQLite::Database db("test.db3", SQLite::OPEN_READWRITE | SQLite::OPEN_CREATE);
    std::cout << "SQLite database file " << db.getFilename().c_str() << "\n";
    // Create a new table with an explicit "id" column aliasing the underlying rowid
    db.exec("DROP TABLE IF EXISTS test");
    db.exec("CREATE TABLE test (id INTEGER PRIMARY KEY, value TEXT)");
    // first row
    db.exec("INSERT INTO test VALUES (NULL, \"test\")");
    // second row
    db.exec("INSERT INTO test VALUES (NULL, \"second\")");
    // update the second row
    db.exec("UPDATE test SET value=\"second-updated\" WHERE id='2'");
    // Check the results: expect two rows of results
    SQLite::Statement query(db, "SELECT * FROM test");
    std::cout << "SELECT * FROM test:\n";
    while (query.executeStep())
    {
        std::cout << "row ("
                  << query.getColumn(0) << ", \""
                  << query.getColumn(1) << "\")\n";
    }
    getch();
    return 0;
}
15. Finally, run the project to compile it and see the library in operation:
Final Thoughts
As this simple exercise has demonstrated, C++Builder with the CLANG compiler opens up a world of possibilities for the different libraries and frameworks that can be integrated into your projects. We didn’t have to make a single code change to get this open-source C++ library to compile and work in our C++Builder applications.
There are thousands of other C++ projects out there and I can strongly encourage you to experiment with using them in your C++ projects!
Using the GetIt patch mechanism, Embarcadero is notifying customers on earlier RAD Studio 10.4 releases that 10.4.2 is available and providing a simple way to install it.
Since RAD Studio 10.4, the IDE has included a mechanism for delivering patches via GetIt, along with an option to alert users on the Welcome Page that patches are available. Because there is no similar mechanism for announcing a new release, we have also tried using patches to deliver the 10.4.2 release.
If you need advanced file and streaming compression for your application, the IPWorks Zip library is a good choice. It is easy to integrate and provides fast, effective components that enable developers to rapidly add advanced compression and decompression features to an application.
What is IPWorks ZIP?
IPWorks ZIP allows developers to easily integrate compression and decompression into applications using the Zip, Tar, Gzip, 7-Zip, Bzip2, ZCompress, or Jar standards for compression.
procedure TFormCreatesevenzip.btnZipClick(Sender: TObject);
var
  curListItem: TListItem;
begin
  try
    SevenZip1.Reset;
    ProgressBar1.Position := 0;
    SevenZip1.ArchiveFile := txtArchiveFile.Text;
    curListItem := lstLocalFiles.Selected;
    if Assigned(curListItem) then
    begin
      while Assigned(curListItem) do
      begin
        SevenZip1.IncludeFiles(lblDirectory.Caption + '\' +
          curListItem.Caption);
        curListItem := lstLocalFiles.GetNextItem(curListItem, sdAll,
          [isSelected]);
      end;
    end;
    // You can also add files to an archive using a filemask, for instance:
    // SevenZip1.IncludeFiles('c:\*.txt');
    if not (txtPassword.Text = '') then
    begin
      SevenZip1.Password := txtPassword.Text;
    end;
    SevenZip1.Compress;
    ShowMessage('SevenZip complete.');
  except
    on E: EipzZip do
      ShowMessage(E.Message);
  end;
  SevenZip1.ArchiveFile := ''; // Release the handle on the archive file.
end;
IPWorks Features
Streaming support during file compression/decompression
Delete individual files within an archive without decompressing the entire file.
Encryption
Support for the Open XML Packaging format
PKZip-compatible Zip compressor
128-bit, 192-bit, and 256-bit AES encryption
Zip64 Archives support – 4GB+ zip files with a virtually unlimited number of files
Serial communication is a simple means of sending data over long distances quickly and reliably, and it is an increasingly important aspect of embedded systems. A good understanding is essential for the aspiring designer. There are two broad types of serial communication:
Synchronous
Asynchronous
There are lots of standards and protocols when it comes to serial communications, and you should match the right protocol with the right application.
What is TMS Async?
TMS Async is a communications package that provides access to the serial ports under Windows. The event-driven architecture provides the highest possible performance and allows all the tools to run in the background.
TMS Async Features
The advanced class object structure
Easy to use design interface
Optimized event-driven architecture
Links directly to your EXE, no runtime
Supports all important transfer protocols
procedure TForm1.VaCommRxBuf(Sender: TObject; Data: PVaData;
  Count: Integer);
var
  I: Integer;
begin
  for I := 0 to Count - 1 do
    case Data^[I] of
      #10: ;
      #13: MemoIndex := Memo2.Lines.Add('');
    else
      begin
        Memo2.Lines[MemoIndex] := Memo2.Lines[MemoIndex] + Data^[I];
        Memo2.Refresh;
      end;
    end;
end;

procedure TForm1.VaModem21RingDetect(Sender: TObject; Rings: Integer;
  var AcceptCall: Boolean);
begin
  Memo1.Lines.Add('RING: ' + IntToStr(Rings));
  AcceptCall := Rings >= 1;
end;

procedure TForm1.VaModem21CommandTimeout(Sender: TObject);
begin
  case VaModem1.ModemAction of
    maInit: Memo1.Lines.Add('Error initializing modem.');
  end;
end;
Using the GetIt patch mechanism, Embarcadero is alerting customers on earlier RAD Studio 10.4 releases that 10.4.2 is available and offering a simplified way to install it.
Since RAD Studio 10.4, the IDE has had a mechanism for delivering patches via GetIt, along with an option to notify users on the Welcome Page that patches are available. Given that there is no similar mechanism for alerting users to a new release, we have tried using the patches to deliver the 10.4.2 release as well.
In short, if you are using 10.4 or 10.4.1 and have an active update subscription, you should see an alert about new patches on the Welcome Page, and once you open the GetIt Package Manager patches section, you should see that 10.4.2 is available:
If you install it, after you accept the EULA the system displays a readme file (see below) explaining the process and opens the folder containing the installer executable. Note that the installation is not automatic, as selecting Install in GetIt only downloads the patch. The reason is that installing 10.4.2 still requires uninstalling the current version and installing the new one. The advantage of this process is that it offers an alert mechanism and a simplified download directly from the IDE (rather than from an external site).
Note: If you are not interested in installing 10.4.2 (for whatever reason), you can “mute” the patch notification by downloading the patch and not installing it.
The readme file for the 10.4.2 installation package
This GetIt package contains the online installer for RAD Studio, Delphi, and C++Builder 10.4 Release 2 (also known as 10.4.2). Alternatively, you can download the ISO offline installer from the customer portal at my.embarcadero.com.
You should close RAD Studio before running the installer. Even though it is delivered in the GetIt “patch” channel, it is not a patch, but a new release. This installer will uninstall your current version of RAD Studio and install the new version, preserving the configuration settings from the registry (if you leave the corresponding option untouched from its default). You may also consider running the migration tool to make a copy of your configuration before launching the installer. See http://docwiki.embarcadero.com/RADStudio/Sydney/en/Settings_Migration_Tool .
Usando o mecanismo de patch GetIt, a Embarcadero está alertando os clientes em versões anteriores do RAD Studio 10.4 que o 10.4.2 está disponível e oferecendo uma maneira simplificada de instalá-lo.
Desde o RAD Studio 10.4, o IDE tem um mecanismo para entregar patches via GetIt e uma opção para alertar os usuários de que os patches estão disponíveis na página de boas-vindas. Dado que não existe um mecanismo semelhante para alertar sobre uma nova versão, tentamos usar os patches também para entregar a versão 10.4.2.
Resumindo, se você estiver usando 10.4 ou 10.4.1 e tiver uma assinatura de atualização ativa, deverá ver um alerta de novos patches na página de boas-vindas e, ao abrir a seção de patches do GetIt Package Manager, verá que 10.4.2 está disponível:
Se você instalá-lo, após aceitar o EULA, o sistema mostrará um arquivo leiame (veja abaixo) explicando o processo e abrirá a pasta com o executável do instalador. Observe que a instalação não é automática, pois selecionar Instalar no GetIt apenas fará o download do patch. O motivo é que a instalação do 10.4.2 ainda requer a desinstalação da versão atual e a instalação da nova versão. A vantagem desse processo é que ele oferece um mecanismo de alerta e um download simplificado diretamente do IDE (em vez de um site externo).
Aviso: Se você não estiver interessado em instalar o 10.4.2 (por qualquer motivo), você pode “silenciar” a notificação do patch baixando o patch e não instalando-o.
O Leiame do pacote de instalação 10.4.2
Este pacote GetIt contém o instalador online para RAD Studio, Delphi e C ++ Builder 10.4 Release 2 (também conhecido como 10.4.2). Como alternativa, você pode baixar o instalador offline ISO do portal do cliente em my.embarcadero.com .
Você deve fechar o RAD Studio antes de executar o instalador. Mesmo se entregue no canal GetIt “patch”, este não é um patch, mas sim um novo lançamento. Este instalador continuará desinstalando sua versão atual do RAD Studio e instalará a nova versão, preservando as definições de configuração (se você deixar a opção correspondente intacta de seu padrão) do registro. Você também pode considerar a execução da ferramenta de migração para fazer uma cópia de sua configuração antes de iniciar o instalador. Consulte http://docwiki.embarcadero.com/RADStudio/Sydney/en/Settings_Migration_Tool .
Используя механизм исправлений GetIt, Embarcadero предупреждает клиентов о предыдущих выпусках RAD Studio 10.4, что доступна версия 10.4.2, и предлагает упрощенный способ ее установки.
Начиная с RAD Studio 10.4, в среде IDE есть механизм доставки исправлений через GetIt и возможность предупреждать пользователей о доступных исправлениях на странице приветствия. Поскольку аналогичного механизма для оповещения о новом выпуске нет, мы попытались использовать исправления также для выпуска версии 10.4.2.
Короче говоря, если вы используете 10.4 или 10.4.1 и у вас есть активная подписка на обновления, вы должны увидеть предупреждение о новых патчах на странице приветствия, и как только вы откроете раздел патчей GetIt Package Manager, вы должны увидеть, что 10.4.2 доступен:
Если вы установите его, после принятия лицензионного соглашения система покажет файл readme (см. Ниже), объясняющий процесс, и откроет папку с исполняемым файлом установщика. Обратите внимание, что установка не является автоматической, так как при выборе «Установить» в GetIt будет загружен только патч. Причина в том, что для установки 10.4.2 по-прежнему требуется удалить текущую версию и установить новую версию. Преимущество этого процесса заключается в том, что он предлагает механизм предупреждений и упрощенную загрузку непосредственно из среды IDE (а не с внешнего сайта).
Примечание: если вы не заинтересованы в установке 10.4.2 (по какой-либо причине), вы можете «отключить» уведомление о патче, загрузив патч и не устанавливая его.
Ознакомительные сведения о пакете установщика 10.4.2
Этот пакет GetIt содержит онлайн-установщик для RAD Studio, Delphi и C ++ Builder 10.4 Release 2 (также известного как 10.4.2). В качестве альтернативы вы можете загрузить автономный установщик ISO с клиентского портала my.embarcadero.com .
Перед запуском установщика необходимо закрыть RAD Studio. Даже если он будет доставлен через канал GetIt «патч», это не патч, а скорее новый выпуск. Этот установщик продолжит удаление текущей версии RAD Studio и установит новую версию, сохранив параметры конфигурации (если вы оставите соответствующий параметр нетронутым по умолчанию) из реестра. Вы также можете рассмотреть возможность запуска инструмента миграции, чтобы сделать копию вашей конфигурации перед запуском установщика. См. Http://docwiki.embarcadero.com/RADStudio/Sydney/en/Settings_Migration_Tool .
Die RAD Studio IDE verfügt über eine Vielzahl von Tastaturkürzeln, mit denen Sie nicht mit der Maus greifen und noch schneller codieren können. Die Tastaturkürzel von RAD Studio können die Entwicklerproduktivität bei vielen regulären Jobs unterstützen, die ein Entwickler in der IDE ausführt, einschließlich:
Navigieren Sie in Ihrem Projekt innerhalb der IDE
Schnelleres Schreiben von Code durch Code-Vervollständigung, Code-Vorlagen, Makroaufzeichnung (und -wiedergabe), Code-Faltung und Bücher
Erweitertes Refactoring und einfaches Suchen und Ersetzen
Auswählen, Verschieben und Neuanordnen von Komponenten zur Entwurfszeit
Öffnen oder Auswählen bestimmter IDE-Fenster
Ausführen und Debuggen von Projekten
ToDos erstellen
Die RAD Studio IDE ist ein wirklich leistungsfähiger Code-Editor, aber bei so vielen Verknüpfungen ist es oft schwierig zu wissen, wo ich anfangen soll. Nach Gesprächen mit einer Gruppe neuer Benutzer, die kürzlich mit RAD Studio vertraut waren, habe ich diesen druckbaren Tastatur-Spickzettel erstellt, der sehr nützlich ist, um immer griffbereit zu sein oder an die Wand zu hängen. Nicht alle Tastaturkürzel sind hier enthalten, aber ich habe mich auf viele der häufig verwendeten konzentriert, die nicht spezifisch für das Betriebssystem sind. Die vollständige Anleitung finden Sie in DocWiki.
Weitere Informationen zu RAD Studio IDE-Produktivitätsverknüpfungen
Es gibt eine Reihe von Blog-Posts und Artikeln, die im Laufe der Jahre erstellt wurden und sich mit Verknüpfungen und der IDE-Produktivität befassen. Hier sind einige Favoriten, die Ihnen weiterhelfen sollen.
Delphi Fandom – das Versionen enthält, für die einige Verknüpfungen hinzugefügt wurden
Können Sie alternative Tastaturzuordnungen in RAD Studio, Delphi, C ++ Builder verwenden?
Ja, du kannst! Die RAD Studio IDE-Tastaturzuordnungen können in verschiedenen Formaten festgelegt werden, einschließlich Visual Basic und Visual Studio . Einen vollständigen Index der Tastaturzuordnungen finden Sie unter DocWiki Keyboard Mappings Index . Dies kann sehr hilfreich sein, wenn Sie regelmäßig alternative Layouts in anderer Software verwenden und an diese Tastaturzuordnung gewöhnt sind.
RAD Studio IDE tiene una gran cantidad de atajos de teclado que pueden evitar que tenga que agarrar el mouse y ayudarlo a codificar aún más rápido. Los métodos abreviados de teclado de RAD Studio pueden ayudar a la productividad del desarrollador durante muchos de los trabajos habituales que realiza un desarrollador en el IDE, incluidos,
Navegando por su proyecto dentro del IDE
Escribir código más rápido con finalización de código, plantillas de código, grabación (y reproducción) de macros, plegado de código y libros
Refactorización avanzada y búsqueda y reemplazo básicos
Seleccionar, mover y reorganizar componentes en el momento del diseño
Abrir o seleccionar ventanas IDE específicas
Ejecución y depuración de proyectos
Crear tareas pendientes
RAD Studio IDE es un editor de código realmente poderoso, pero con tantos atajos, a menudo es difícil saber por dónde empezar. Después de las discusiones con un grupo de nuevos usuarios que estaban mejorando sus habilidades con RAD Studio, he creado esta hoja de trucos de teclado imprimible que es realmente útil para tener a mano o para colocar en la pared. No todos los atajos de teclado están aquí, pero me he centrado en muchos de los más utilizados que no son específicos del sistema operativo. Para obtener la guía completa, visite DocWiki.
Más información sobre los accesos directos de productividad de RAD Studio IDE
Hay una serie de publicaciones de blog y artículos que se han creado a lo largo de los años sobre los accesos directos y la productividad del IDE. Aquí hay algunos favoritos para ayudarlo más.
Delphi Fandom – que incluye versiones, se agregaron algunos atajos
¿Puede utilizar asignaciones de teclado alternativas en RAD Studio, Delphi, C ++ Builder?
¡Sí tu puedes! Las asignaciones de teclado IDE de RAD Studio se pueden configurar en varios formatos diferentes, incluidos Visual Basic y Visual Studio . Para obtener un índice completo de asignaciones de teclado, visite DocWiki Keyboard Mappings Index . Esto realmente puede ayudar si está utilizando diseños alternativos en otro software de forma regular y está acostumbrado a la asignación de teclado.
O RAD Studio IDE possui um grande número de atalhos de teclado que podem evitar que você precise usar o mouse e ajudá-lo a codificar ainda mais rápido. Os atalhos de teclado do RAD Studio podem ajudar a produtividade do desenvolvedor durante muitos dos trabalhos regulares que um desenvolvedor faz no IDE, incluindo,
Navegando em seu projeto dentro do IDE
Escrever código mais rápido com autocompletar código, modelos de código, gravação de macro (e reprodução), dobragem de código e livros
Refatoração avançada e pesquisa e substituição básicas
Seleção, movimentação e reorganização de componentes em tempo de design
Abrindo ou selecionando janelas IDE específicas
Executando e depurando projetos
Criando ToDo’s
O RAD Studio IDE é um editor de código realmente poderoso, mas com tantos atalhos, geralmente é difícil saber por onde começar. Após discussões com um grupo de novos usuários recentemente que estavam se aprimorando para o RAD Studio, criei esta folha de referências do teclado para impressão que é realmente útil para manter por perto ou na parede. Nem todos os atalhos de teclado estão aqui, mas me concentrei em muitos dos mais usados que não são específicos do sistema operacional. Para obter o guia completo, visite DocWiki.
Aprender mais sobre os atalhos de produtividade do RAD Studio IDE
Existem várias postagens de blog e artigos que foram criados ao longo dos anos discutindo atalhos e produtividade do IDE. Aqui estão alguns favoritos para ajudá-lo ainda mais.
Delphi Fandom – que inclui versões e alguns atalhos foram adicionados
Você pode usar mapeamentos de teclado alternativos no RAD Studio, Delphi, C ++ Builder?
Sim você pode! Os mapeamentos de teclado IDE do RAD Studio podem ser definidos em vários formatos diferentes, incluindo Visual Basic e Visual Studio . Para obter um índice completo de mapeamentos de teclado, visite DocWiki Keyboard Mappings Index . Isso pode realmente ajudar se você estiver usando layouts alternativos em outro software regularmente e estiver acostumado com esses mapeamentos de teclado.
В RAD Studio IDE есть большое количество сочетаний клавиш, которые могут избавить вас от хватания мыши и помочь вам писать код еще быстрее. Сочетания клавиш RAD Studio могут повысить продуктивность разработчика при выполнении многих обычных задач, выполняемых разработчиком в среде IDE, в том числе:
Навигация по вашему проекту в среде IDE
Ускорение написания кода с автозавершением кода, шаблонами кода, записью (и воспроизведением) макросов, сворачиванием кода и книгами
Расширенный рефакторинг и базовый поиск и замена
Выбор, перемещение и перестановка компонентов во время разработки
Открытие или выбор определенных окон IDE
Запуск и отладка проектов
Создание ToDo
RAD Studio IDE — действительно мощный редактор кода, но при таком большом количестве ярлыков часто бывает трудно понять, с чего начать. После недавних обсуждений с группой новых пользователей, которые повышали квалификацию до RAD Studio, я создал эту печатную шпаргалку по клавиатуре, которую действительно полезно держать под рукой или повесить на стену. Здесь присутствуют не все сочетания клавиш, но я сосредоточился на многих из наиболее часто используемых, которые не относятся к операционной системе. Чтобы получить полное руководство, посетите DocWiki.
Дополнительные сведения о ярлыках производительности RAD Studio IDE
Существует ряд сообщений в блогах и статей, которые были созданы на протяжении многих лет, в которых обсуждаются ярлыки и производительность IDE. Вот несколько избранных, которые помогут вам в дальнейшем.
Delphi Fandom — включает версии, в которые были добавлены некоторые ярлыки
Можете ли вы использовать альтернативные раскладки клавиатуры в RAD Studio, Delphi, C ++ Builder?
Да, ты можешь! Сопоставления клавиатуры IDE RAD Studio могут быть настроены на несколько различных форматов, включая Visual Basic и Visual Studio . Полный указатель сопоставлений клавиатуры см. На сайте DocWiki Keyboard Mappings Index . Это действительно может помочь, если вы регулярно используете альтернативные раскладки в другом программном обеспечении и привыкли к этим раскладкам клавиатуры.
We have new post picks for you from the LearnCPlusPlus.org website, drawn from some of the most interesting posts of the last week. If you are a beginner, or want to jump into C++ Builder, please visit LearnCPlusPlus.org for great posts ranging from the basics to professional examples, with full code and snippets.
Do you want to learn how to convert an image to an alpha image with a given color? How do we use the clipboard in modern C++? How can we copy an Excel table from the clipboard to a string grid? Want to learn how to read and write files using handle-based file operations? How can you add styled skins to your UI elements in VCL and FMX?
Examples are given in the picks below; please check them out. We hope you enjoy them.
Have you ever thought about automating repetitive QuickBooks accounting tasks with Delphi or C++ Builder?
If so, this post helps you get started quickly with QuickBooks automation development.
As a reminder, QuickBooks is accounting software for businesses.
With the nsoftware QuickBooks components, you can easily connect to your QuickBooks data and automate accounting with your programming skills.
What is the QuickBooks Connectivity Component?
The QuickBooks Integrator provides easy-to-use components for QuickBooks development, facilitating tasks such as adding, updating, or retrieving customer information, vendor information, employee information, transactions, etc. The QuickBooks Integrator helps you access QuickBooks remotely with the included QBConnector Component or the free Remote Connector for QuickBooks utility.
QuickBooks Connectivity Component Features:
Uniform & Extensible Design
Fully Integrated Components
Blazing Fast Performance
Detailed documentation and hundreds of sample applications to get you started
and more
Supported Platforms
Delphi
C++ Builder
and more
procedure TFormBillpayment.btnPayBillsClick(Sender: TObject);
var
  i: integer;
begin
  iqbBillPayment1.Reset();
  for i := 0 to lvwBillsToPay.Items.Count - 1 do
  begin
    if lvwBillsToPay.Items[i].Checked then
    begin
      iqbBillPayment1.AppliedToCount := iqbBillPayment1.AppliedToCount + 1;
      iqbBillPayment1.AppliedToRefId[iqbBillPayment1.AppliedToCount - 1] := lvwBillsToPay.Items[i].SubItems[4];
      // For simplicity, pay the full amount
      iqbBillPayment1.AppliedToPaymentAmount[iqbBillPayment1.AppliedToCount - 1] := lvwBillsToPay.Items[i].SubItems[2];
    end;
  end;
  iqbBillPayment1.QBConnectionString := qbConnectionString;
  iqbBillPayment1.PaymentMethod := TiqbbillpaymentPaymentMethods(cbMethod.ItemIndex);
  if (rbToBePrinted.Checked and rbToBePrinted.Visible) then
    iqbBillPayment1.IsToBePrinted := true;
  if (rbAssignNumber.Checked and rbAssignNumber.Visible) then
    iqbBillPayment1.RefNumber := txtCheckNumber.Text;
  // The references to accounts and vendors in QuickBooks use IDs in place of
  // names. Because these values are unique, QuickBooks can access them faster.
  iqbBillPayment1.PayeeId := vendorIDs[cbVendor.ItemIndex];
  if iqbBillPayment1.PaymentMethod = pmCheck then
    iqbBillPayment1.BankAccountId := accountIDs[cbAccount.ItemIndex]
  else
    iqbBillPayment1.CreditCardId := accountIDs[cbAccount.ItemIndex];
  try
    Screen.Cursor := crHourglass;
    iqbBillPayment1.Add();
    ShowMessage('Bill payment ' + iqbBillPayment1.RefId + ' added successfully.');
    RefreshBillsToPay();
  except
    on ex: EiqbBillPayment do
      ShowMessage('Error entering bill payment: ' + ex.Message);
  end;
  Screen.Cursor := crDefault;
end;
Sometimes developers need to list the known Wi-Fi networks and their configurations from a Delphi app programmatically. Not sure how? Don't worry: MiTeC's System Information Management Suite helps you enumerate the known Wi-Fi networks. In this blog post, we will learn how to use the MiTeC_WLANC component.
Platforms: Windows.
Installation Steps:
You can easily install this Component Suite from GetIt Package Manager. The steps are as follows.
In the RAD Studio IDE, navigate to Tools > GetIt Package Manager, select Components in Categories, choose Trial - MiTeC System Information Component Suite 14.3, and click the Install button.
Read the license and click Agree All. An information dialog appears saying 'Requires a restart of RAD Studio at the end of the process. Do you want to proceed?' Click Yes and continue.
It will download the plugin and install it. Once installed, click Restart now.
How to run the Demo app:
Navigate to the Demos folder of the System Information Management Suite trial setup, which is installed during the GetIt installation, e.g. C:\Users\<username>\Documents\Embarcadero\Studio\21.0\CatalogRepository\MiTeC-14.3\Demos\Delphi26.
Open the WLANC project in RAD Studio 10.4.1, then compile and run the application.
This demo app shows how to list the known Wi-Fi networks, enumerate them, and access their configuration properties.
Components used in MSIC WLANC Demo App:
TMiTeC_WLANC gathers information about known Wi-Fi networks and their configurations.
TListView to show the known Wi-Fi networks and their properties.
TButton to save the listed Wi-Fi networks to a .sif file and close the application.
Implementation Details:
An instance of TMiTeC_WLANC named WLANC is created. Loop through the WLANC record count and add a network configuration item to the list view for each record, listing properties such as SSID, key, authentication, encryption, adapter name, IP address, and timestamp of each TWLANRecord item as list view subitems.
procedure TForm2.RefreshData;
var
  i: Integer;
begin
  Screen.Cursor := crHourglass;
  try
    WLANC.RefreshData;
    List.Items.Clear;
    for i := 0 to WLANC.RecordCount - 1 do
      with List.Items.Add do
      begin
        Caption := WLANC.Records[i].SSID;
        SubItems.Add(WLANC.Records[i].Key);
        SubItems.Add(WLANC.Records[i].Authentication);
        SubItems.Add(WLANC.Records[i].Encryption);
        SubItems.Add(WLANC.Records[i].Connection);
        SubItems.Add(WLANC.Records[i].AdapterName);
        SubItems.Add(WLANC.Records[i].IPAddress);
        SubItems.Add(DateTimeToStr(UTCToLocalDatetime(WLANC.Records[i].Timestamp)));
      end;
  finally
    Screen.Cursor := crDefault;
  end;
end;
The known Wi-Fi network properties are displayed as shown below.
It's that simple to enumerate known Wi-Fi networks and list their configuration details in your application. Use this MiTeC component suite and get the job done quickly.
The RAD Studio IDE has a great number of keyboard shortcuts that can save you from grabbing the mouse and help you code even faster. The RAD Studio keyboard shortcuts can help developer productivity during many of the regular jobs a developer does in the IDE, including:
Navigating your project within the IDE
Writing code faster with code completion, code templates, macro recording (and playback), code folding, and bookmarks
Advanced refactoring, and basic search and replace
Selecting, moving, and rearranging components at design time
Opening or selecting specific IDE windows
Running and debugging projects
Creating ToDo’s
The RAD Studio IDE is a really powerful code editor, but with so many shortcuts, it’s often hard to know where to start. Following discussions with a group of new users recently that were up-skilling to RAD Studio, I have created this printable keyboard cheat sheet that is really useful to keep close to hand or put on the wall. Not all keyboard shortcuts are in here, but I have focused on many of the commonly used ones that are not specific to the operating system. For the complete guide, visit DocWiki.
Learning more about RAD Studio IDE Productivity Shortcuts
There are a number of blog posts and articles that have been created over the years discussing shortcuts and IDE productivity. Here are a few favorites to help you further.
Delphi Fandom – which includes the versions in which some shortcuts were added
Can you use alternative Keyboard Mappings in RAD Studio, Delphi, C++Builder?
Yes, you can! The RAD Studio IDE keyboard mappings can be set to a number of different formats, including Visual Basic and Visual Studio. For a full index of keyboard mappings, visit the DocWiki Keyboard Mappings Index. This can really help if you use alternative layouts in other software on a regular basis and are used to those keyboard mappings.
Using the GetIt patch mechanism, Embarcadero is alerting customers on prior RAD Studio 10.4 releases that 10.4.2 is available and offering a simplified way to install it.
Since RAD Studio 10.4, the IDE has a mechanism to deliver patches via GetIt and an option to alert users that patches are available in the Welcome page. Given there isn’t a similar mechanism to alert about a new release, we have made an attempt to use the patches also to deliver the 10.4.2 release.
In short, if you are using 10.4 or 10.4.1 and you have an active update subscription, you should see a new patches alert in the Welcome page, and once you open the GetIt Package Manager patches section, you should see that 10.4.2 is available:
If you install it, after accepting the EULA, the system will show a readme file (see below) explaining the process and open the folder with the installer executable. Notice that the installation is not automatic, as selecting Install in GetIt will only download the patch. The reason is that installation of 10.4.2 still requires uninstalling the current version and installing the new version. The advantage of this process is that it offers an alert mechanism and a simplified download directly from the IDE (rather than from an external site).
Notice: If you are not interested in installing 10.4.2 (for any reason) you can “silence” the patch notification by downloading the patch and not installing it.
The 10.4.2 Installer Package Readme
This GetIt package contains the online installer for RAD Studio, Delphi, and C++ Builder 10.4 Release 2 (also known as 10.4.2). As an alternative, you can download the ISO offline installer from the customer portal at my.embarcadero.com.
You should close RAD Studio before running the installer. Even if delivered in the “patch” GetIt channel, this is not a patch, but rather a new release. This installer will proceed uninstalling your current version of RAD Studio and install the new version, preserving the configuration settings (if you leave the corresponding option untouched from its default) from the registry. You can also consider running the Migration Tool to make a copy of your configuration before starting the installer. See http://docwiki.embarcadero.com/RADStudio/Sydney/en/Settings_Migration_Tool.
XGBoost is a powerful and popular implementation of the gradient boosting ensemble algorithm.
An important aspect in configuring XGBoost models is the choice of loss function that is minimized during the training of the model.
The loss function must be matched to the predictive modeling problem type, in the same way we must choose appropriate loss functions based on problem types with deep learning neural networks.
In this tutorial, you will discover how to configure loss functions for XGBoost ensemble models.
After completing this tutorial, you will know:
Specifying loss functions used when training XGBoost ensembles is a critical step, much like neural networks.
How to configure XGBoost loss functions for binary and multi-class classification tasks.
How to configure XGBoost loss functions for regression predictive modeling tasks.
Let’s get started.
A Gentle Introduction to XGBoost Loss Functions Photo by Kevin Rheese, some rights reserved.
Tutorial Overview
This tutorial is divided into three parts; they are:
XGBoost and Loss Functions
XGBoost Loss for Classification
XGBoost Loss for Regression
XGBoost and Loss Functions
Extreme Gradient Boosting, or XGBoost for short, is an efficient open-source implementation of the gradient boosting algorithm. As such, XGBoost is an algorithm, an open-source project, and a Python library.
It is designed to be both computationally efficient (e.g. fast to execute) and highly effective, perhaps more effective than other open-source implementations.
XGBoost supports a range of different predictive modeling problems, most notably classification and regression.
XGBoost is trained by minimizing the loss of an objective function against a dataset. As such, the choice of loss function is a critical hyperparameter and tied directly to the type of problem being solved, much like deep learning neural networks.
The implementation allows the objective function to be specified via the “objective” hyperparameter, and sensible defaults are used that work for most cases.
Nevertheless, there remains some confusion by beginners as to what loss function to use when training XGBoost models.
We will take a closer look at how to configure the loss function for XGBoost in this tutorial.
Before we get started, let’s get setup.
XGBoost can be installed as a standalone library and an XGBoost model can be developed using the scikit-learn API.
The first step is to install the XGBoost library if it is not already installed. This can be achieved using the pip python package manager on most platforms; for example:
sudo pip install xgboost
You can then confirm that the XGBoost library was installed correctly and can be used by running the following script.
# check xgboost version
import xgboost
print(xgboost.__version__)
Running the script will print the version of the XGBoost library you have installed.
Your version should be the same or higher. If not, you must upgrade your version of the XGBoost library.
1.1.1
It is possible that you may have problems with the latest version of the library. It is not your fault.
Sometimes, the most recent version of the library imposes additional requirements or may be less stable.
If you do have errors when trying to run the above script, I recommend downgrading to version 1.0.1 (or lower). This can be achieved by specifying the version to install to the pip command, as follows:
sudo pip install xgboost==1.0.1
If you see a warning message, you can safely ignore it for now. For example, below is an example of a warning message that you may see and can ignore:
FutureWarning: pandas.util.testing is deprecated. Use the functions in the public API at pandas.testing instead.
If you require specific instructions for your development environment, see the tutorial:
The XGBoost library has its own custom API, although we will use the method via the scikit-learn wrapper classes: XGBRegressor and XGBClassifier. This will allow us to use the full suite of tools from the scikit-learn machine learning library to prepare data and evaluate models.
Both models operate the same way and take the same arguments that influence how the decision trees are created and added to the ensemble.
For more on how to use the XGBoost API with scikit-learn, see the tutorial:
Next, let’s take a closer look at how to configure the loss function for XGBoost on classification problems.
XGBoost Loss for Classification
Classification tasks involve predicting a label or probability for each possible class, given an input sample.
There are two main types of classification tasks with mutually exclusive labels: binary classification, which has two class labels, and multi-class classification, which has more than two class labels.
Binary Classification: Classification task with two class labels.
Multi-Class Classification: Classification task with more than two class labels.
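To make the distinction concrete, here is a small sketch using scikit-learn's make_classification helper (the same one the worked examples later in this tutorial rely on) to generate one dataset of each type:

```python
# Generate a binary and a multi-class dataset and inspect their labels.
from sklearn.datasets import make_classification
from numpy import unique

# Binary classification: two class labels (n_classes defaults to 2)
X_bin, y_bin = make_classification(n_samples=1000, n_features=20, random_state=1)
print(unique(y_bin))  # [0 1]

# Multi-class classification: more than two class labels
X_multi, y_multi = make_classification(n_samples=1000, n_features=20,
                                       n_informative=15, n_redundant=5,
                                       n_classes=3, random_state=1)
print(unique(y_multi))  # [0 1 2]
```

Note that for three classes, n_informative must be raised above its default of 2 so that make_classification can place enough class clusters in the informative subspace.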
For more on the different types of classification tasks, see the tutorial:
XGBoost provides loss functions for each of these problem types.
It is typical in machine learning to train a model to predict the probability of class membership, and, if the task requires crisp class labels, to post-process the predicted probabilities (e.g. take the argmax).
This approach is used when training deep learning neural networks for classification, and is also recommended when using XGBoost for classification.
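As a minimal sketch of that post-processing step (plain NumPy, independent of XGBoost), the probs array below is a made-up stand-in for the per-class probabilities a classifier's predict_proba method would return:

```python
# Convert predicted class-membership probabilities into crisp class labels
# with argmax. The probs array is hypothetical model output, one row per
# sample and one column per class.
import numpy as np

probs = np.array([[0.7, 0.2, 0.1],   # most likely class 0
                  [0.1, 0.3, 0.6],   # most likely class 2
                  [0.2, 0.5, 0.3]])  # most likely class 1

labels = np.argmax(probs, axis=1)
print(labels)  # [0 2 1]
```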
The loss function used for predicting probabilities for binary classification problems is “binary:logistic” and the loss function for predicting class probabilities for multi-class problems is “multi:softprob“.
“binary:logistic“: XGBoost loss function for binary classification.
“multi:softprob“: XGBoost loss function for multi-class classification.
These string values can be specified via the “objective” hyperparameter when configuring your XGBClassifier model.
For example, for binary classification:
...
# define the model for binary classification
model = XGBClassifier(objective='binary:logistic')
And, for multi-class classification:
...
# define the model for multi-class classification
model = XGBClassifier(objective='multi:softprob')
Importantly, if you do not specify the “objective” hyperparameter, the XGBClassifier will automatically choose one of these loss functions based on the data provided during training.
We can make this concrete with a worked example.
The example below creates a synthetic binary classification dataset, fits an XGBClassifier on the dataset with default hyperparameters, then prints the model objective configuration.
# example of automatically choosing the loss function for binary classification
from sklearn.datasets import make_classification
from xgboost import XGBClassifier
# define dataset
X, y = make_classification(n_samples=1000, n_features=20, n_informative=15, n_redundant=5, random_state=1)
# define the model
model = XGBClassifier()
# fit the model
model.fit(X, y)
# summarize the model loss function
print(model.objective)
Running the example fits the model on the dataset and prints the loss function configuration.
We can see the model automatically chose a loss function for binary classification.
binary:logistic
Alternately, we can specify the objective and fit the model, confirming the loss function was used.
# example of manually specifying the loss function for binary classification
from sklearn.datasets import make_classification
from xgboost import XGBClassifier
# define dataset
X, y = make_classification(n_samples=1000, n_features=20, n_informative=15, n_redundant=5, random_state=1)
# define the model
model = XGBClassifier(objective='binary:logistic')
# fit the model
model.fit(X, y)
# summarize the model loss function
print(model.objective)
Running the example fits the model on the dataset and prints the loss function configuration.
We can see the model used the specified loss function for binary classification.
binary:logistic
Let’s repeat this example on a dataset with more than two classes. In this case, three classes.
The complete example is listed below.
# example of automatically choosing the loss function for multi-class classification
from sklearn.datasets import make_classification
from xgboost import XGBClassifier
# define dataset
X, y = make_classification(n_samples=1000, n_features=20, n_informative=15, n_redundant=5, random_state=1, n_classes=3)
# define the model
model = XGBClassifier()
# fit the model
model.fit(X, y)
# summarize the model loss function
print(model.objective)
Running the example fits the model on the dataset and prints the loss function configuration.
We can see the model automatically chose a loss function for multi-class classification.
multi:softprob
Alternately, we can manually specify the loss function and confirm it was used to train the model.
# example of manually specifying the loss function for multi-class classification
from sklearn.datasets import make_classification
from xgboost import XGBClassifier
# define dataset
X, y = make_classification(n_samples=1000, n_features=20, n_informative=15, n_redundant=5, random_state=1, n_classes=3)
# define the model
model = XGBClassifier(objective="multi:softprob")
# fit the model
model.fit(X, y)
# summarize the model loss function
print(model.objective)
Running the example fits the model on the dataset and prints the loss function configuration.
We can see the model used the specified loss function for multi-class classification.
multi:softprob
Finally, there are other loss functions you can use for classification, including: “binary:logitraw” and “binary:hinge” for binary classification and “multi:softmax” for multi-class classification.
Next, let’s take a look at XGBoost loss functions for regression.
XGBoost Loss for Regression
Regression refers to predictive modeling problems where a numerical value is predicted given an input sample.
Although predicting a probability sounds like a regression problem (i.e. a probability is a numerical value), it is generally not considered a regression type predictive modeling problem.
The XGBoost objective function used when predicting numerical values is the “reg:squarederror” loss function.
“reg:squarederror”: Loss function for regression predictive modeling problems.
This string value can be specified via the “objective” hyperparameter when configuring your XGBRegressor model.
For example:
...
# define the model for regression
model = XGBRegressor(objective='reg:squarederror')
Importantly, if you do not specify the “objective” hyperparameter, the XGBRegressor will automatically choose this objective function for you.
We can make this concrete with a worked example.
The example below creates a synthetic regression dataset, fits an XGBRegressor on the dataset, then prints the model objective configuration.
# example of automatically choosing the loss function for regression
from sklearn.datasets import make_regression
from xgboost import XGBRegressor
# define dataset
X, y = make_regression(n_samples=1000, n_features=20, n_informative=15, noise=0.1, random_state=7)
# define the model
model = XGBRegressor()
# fit the model
model.fit(X, y)
# summarize the model loss function
print(model.objective)
Running the example fits the model on the dataset and prints the loss function configuration.
We can see the model automatically chose a loss function for regression.
reg:squarederror
Alternately, we can specify the objective and fit the model, confirming the loss function was used.
# example of manually specifying the loss function for regression
from sklearn.datasets import make_regression
from xgboost import XGBRegressor
# define dataset
X, y = make_regression(n_samples=1000, n_features=20, n_informative=15, noise=0.1, random_state=7)
# define the model
model = XGBRegressor(objective='reg:squarederror')
# fit the model
model.fit(X, y)
# summarize the model loss function
print(model.objective)
Running the example fits the model on the dataset and prints the loss function configuration.
We can see the model used the specified loss function for regression.
reg:squarederror
Finally, there are other loss functions you can use for regression, including: “reg:squaredlogerror“, “reg:logistic“, “reg:pseudohubererror“, “reg:gamma“, and “reg:tweedie“.
There is no such thing as a universally good hash function. Depending on the context, different criteria determine the quality of a hash, so you should test your hash function using data drawn from the same distribution that you expect it to work on. For example, when hashing 64-bit longs, the default hash function is excellent if the input values are drawn uniformly from all possible long values.
There are different ways to test the distribution, collision, and performance properties of non-cryptographic hash functions. Simple probability arguments are not enough to evaluate collision or distribution behavior; for this, several libraries apply linear cryptanalysis and other statistical methods.
SMHasher is a test suite designed to test the distribution, collision, and performance properties of non-cryptographic hash functions. It aims to be the DieHarder of hash testing, and it does a pretty good job of finding flaws in several popular hashes.
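As a lightweight illustration of the distribution side of such testing, here is a minimal sketch of a chi-squared test over hash buckets. Python's built-in hash() stands in for the hash under test, and the key format and table size are arbitrary choices for illustration:

```python
# a minimal sketch of a bucket-distribution test for a hash function;
# Python's built-in hash() stands in for the hash under test
from collections import Counter

def chi_squared_buckets(keys, hash_fn, n_buckets):
    """Compare observed bucket counts against a uniform expectation."""
    counts = Counter(hash_fn(k) % n_buckets for k in keys)
    expected = len(keys) / n_buckets
    return sum((counts.get(b, 0) - expected) ** 2 / expected
               for b in range(n_buckets))

# draw the keys from the distribution you actually expect in production
keys = [f"user-{i}" for i in range(1000)]
stat = chi_squared_buckets(keys, hash, 64)
# with 63 degrees of freedom, a statistic far above ~92 (p < 0.01)
# would suggest the hash is badly skewed for this input distribution
print(f"chi-squared statistic: {stat:.1f}")
```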
Developers spend most of their time debugging and finding issues by analyzing logs, and there are lots of different logging frameworks available. RAD Studio developers also have several options to choose from. One of the best logging frameworks is TMS Logging.
What is TMS Logging?
This is a compact cross-platform logging framework offering informative log output to a flexible number of targets with a minimum amount of code.
What features does TMS Logging have?
Log to one or more output handlers such as the Console, HTML, Text file, CSV file, TCP/IP, Windows Event Log
Heavily RTTI based for comprehensive type and class logging with simple log statements
Cross-platform support
Multi-thread enabled & thread-safe
Value validation to control logging
Helper functions
Separate TCP/IP client included for viewing logger outputs remotely
Automatic exception logging
Interfaces to myCloudData.net & Exceptionless cloud logging
and more
If you need a really solid logging framework, this is a good one to go with, because it offers more than you are likely to need.
Code Example
procedure TForm1.FormCreate(Sender: TObject);
begin
TMSLogger.Outputs := [loTimeStamp, loLogLevel, loValue];
TMSLogger.OutputFormats.TimeStampFormat := 'The time is {%"hh:nn:ss"dt}, ';
TMSLogger.OutputFormats.LogLevelFormat := 'the loglevel is {%s}, ';
TMSLogger.OutputFormats.ValueFormat := '{%s}';
end;
procedure TForm1.Button1Click(Sender: TObject);
var
s: string;
fmt: string;
begin
s := 'Hello World !';
fmt := 'The value is {%s}';
TMSLogger.Info(s);
TMSLogger.Error(s);
TMSLogger.WarningFormat(fmt, [s]);
TMSLogger.Trace(s);
TMSLogger.Debug(s);
end;
There’s a frank discussion going on in the software industry at the moment about the words we use and the history behind them. Perhaps now is a good time to reconsider some of our terminology. For example, I’ve noticed we have several terms that describe essentially the same kind of testing:
Golden Master
Snapshot
Characterization
Approval
I think it’s time to completely drop the first one of these. In addition, if we could all agree on just one term it could make communication easier. My preferred choice is ‘Approval Testing’. As an industry, as a community of software professionals, can we agree to change the words we use?
What kind of testing are we referring to?
The common mechanism for ‘Golden Master’, ‘Snapshot’, ‘Characterization’ and ‘Approval’ testing is that you run the software, gather the output and store it. The combination of (a) exactly how you set up and ran the software and (b) the stored output, forms the basis of a test case.
When you subsequently run the software with the same set up, you again gather the output. You then compare it against the version you previously stored in the test case. Any difference fails the test.
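Stripped to its essentials, that mechanism can be sketched in a few lines of Python. This is an illustrative stand-alone sketch, not the API of any particular framework; the format_report function and the approved-file name are hypothetical:

```python
# a minimal sketch of the store-then-compare mechanism behind this
# style of testing; format_report and the file name are hypothetical
from pathlib import Path

def format_report(data):
    # (a) how we ran the software: gather its output as text
    return "\n".join(f"{k}: {v}" for k, v in sorted(data.items()))

def verify(received, approved_file="report.approved.txt"):
    # (b) the stored output: compare against the approved version
    path = Path(approved_file)
    if not path.exists():
        # first run: store the output for a human to inspect and approve
        path.write_text(received)
        return "no approved output yet - review and approve"
    if path.read_text() != received:
        return "FAIL: output differs from the approved version"
    return "PASS"

received = format_report({"total": 42, "status": "ok"})
print(verify(received))
```

Any difference from the stored, human-approved output fails the test; approving a new version is simply replacing the stored file after review.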
There are a number of testing frameworks that support this style of testing. Some open source examples:
Full disclosure: I am a contributor to both Approvals and TextTest.
Reasons for choosing the term ‘Approval Testing’
Test cases are designed by people. You decide how to run the software and what output is good enough to store and compare against later. That step where you ‘approve’ the output is crucial to the success of the test case later on. If you make a poor judgement the test might not contain all the essential aspects you want to check for, or it might contain irrelevant details. In the former situation, it might continue to pass even when the software is broken. In the latter situation, the test might fail frequently for no good reason, causing you to mistrust or even ignore it.
I like to describe this style of testing with a term that puts human design decisions front and center.
Comments on the alternative terms
Snapshot
This term draws your attention to the fact that the output you have gathered and stored for later comparison in the test is transient. It’s correct today, but it may not be correct tomorrow. That’s pretty agile – we expect the behaviour of our system to change and we want our tests to be able to keep up.
The problem with this term is that it doesn’t imply any duty of care towards the contents of the snapshot. If a test fails unexpectedly I might just assume nothing is wrong – my snapshot is simply out of date. I can replace it with the newer one. After all, I expect a snapshot to change frequently. Did I just miss finding a bug though?
I prefer to use a word that emphasizes the human judgement involved in deciding what to keep in that snapshot.
Characterization
This is a better term because it draws your attention to the content of the output you store: that it should characterize the program behaviour. You want to ensure that all the essential aspects are included, so your test will check for them. This is clearly an important part of designing the test case.
On the other hand, this term primarily describes tests written after the system is already working and finished. It doesn’t invite you to consider what the system should do or what you or others would like it to do. Approval testing is a much more iterative process where you approve what’s good enough today and expect to approve something better in the future.
Golden Master
This term comes from the record industry where the original audio for a song or album was stored on a golden disk in a special archive. All the copies in the shops were derived from it. The term implies that once you’ve decided on the correct output, and stored it in a test, it should never change. It’s so precious we should store it in a special ‘golden’ archive. It has been likened to ‘pouring concrete on your software’. That is the complete opposite of agile!
In my experience, what is correct program behaviour today will not necessarily be correct program behaviour tomorrow, and we need to update our understanding and our tests. We need to be able to ‘approve’ a new version of the output and see that as a normal part of our work.
This seems to me to be a strong enough argument for dropping the term ‘Golden Master’. If you’ve been following the recent announcement from Github around renaming the default branch to ‘main’, you’ll also be aware there are further objections to the term ‘master’. I would like to be able to communicate with all kinds of people in a respectful and friendly manner. If a particular word is problematic and a good alternative exists, I think it’s a good idea to switch.
In conclusion
Our job is literally about writing words in code and imbuing them with meaning. Using the same words to describe the same thing helps everyone to communicate better. Will you please join me in using the words ‘Approval Testing’ as an umbrella term referring to a particular style of testing? Words matter. We should choose them carefully.
If you would like to have map functionalities in your FMX and VCL applications, you should check out the advanced WebGMaps component by TMSSoftware.
What features does WebGMaps have?
With the WebGMaps component set, you can configure Google Maps in your FMX or VCL applications easily. Moreover, you can configure different map modes, and extra map information can be displayed with little configuration. For instance:
Default road map
Satellite view
Hybrid view
StreetView
Terrain
Traffic information
Bicycle view and Panoramio info
begin
WebGMapsGeocoding1.Address := 'Broadway 615, LOS ANGELES, USA';
if WebGMapsGeocoding1.LaunchGeocoding = erOk then
begin
// center the map at the coordinate
WebGMaps1.MapOptions.DefaultLatitude := WebGMapsGeocoding1.ResultLatitude;
WebGMaps1.MapOptions.DefaultLongitude := WebGMapsGeocoding1.ResultLongitude;
// Add a marker for the Los Angeles theatre
WebGmaps1.Markers.Add(WebGMapsGeocoding1.ResultLatitude,
WebGMapsGeocoding1.ResultLongitude,'Broadway theatre');
// set zoom level
WebGmaps1.MapOptions.ZoomMap := 19;
// launch the display of the map
WebGMaps1.Launch;
end;
end;
Besides, you can apply labels and markers, export to graphic files, construct routes from point to point, and retrieve longitude/latitude coordinates. To make things more convenient for developers, there are also geocoding, direction list, time zone, and various additional components.
This is a commercial component set and you should get a license to integrate it into your applications. When you install, you will get several sample applications and complete documentation.
Object files (the compiled form of each compilation unit, i.e. each .cpp file) are stored in various object formats. For Win64, C++Builder uses 64-bit ELF, a format normally used on Linux. "ELF" stands for Executable and Linkable Format; there are several theories about the origin of the name, but we suspect it lies with fans of Tolkien, known as the author of The Hobbit and The Lord of the Rings. (See page 2 of this PDF.)
A trial version of 10.4.2 is already available, and new purchases of the product will include the 10.4.2 download. If you already own the product and have an active update subscription, you can use your existing license for RAD Studio 10.4.2. You can download 10.4.2 from the new customer portal (my.embarcadero.com).
The software development world is big, and thousands of different technologies and standards have been built for every problem; for instance, schema standards like BizTalk, SEF, Bots, and Altova for working with Electronic Data Interchange (EDI).
IPWorks X12 provides the ultimate versatility for developers and businesses interested in facilitating their software systems with Internet EDI (X12) parsing, translation, and message generation abilities. IPWorks X12 library is a third-party commercial component set for RAD Studio developers.
What do you get from the IPWorks X12 library?
Read and write X12 documents
Navigate documents using XPath notation
Translate X12 to and from XML or JSON
Support for all major schema standards like BizTalk, SEF, Altova, and more
Fast, robust, secure components
Full documentation and samples
and more
procedure TFormX12translator.btnToX12Click(Sender: TObject);
var
inputString: String;
i: Integer;
begin
StatusBar1.Panels[0].Text := '';
try
x12X12Translator1.Reset();
x12X12Translator1.InputFormat := Tx12X12TranslatorInputFormats.xifXML;
x12X12Translator1.OutputFormat := Tx12X12TranslatorOutputFormats.xofX12;
if (btnXmlFile.Checked) then
x12X12Translator1.InputFile := txtXmlFile.Text
else
begin
inputString := '';
for i := 0 to tMemoXmlString.Lines.Count - 1 do
begin
inputString := inputString + tMemoXmlString.Lines[i];
end;
x12X12Translator1.InputData := inputString;
end;
if (btnX12File.Checked) then
begin
x12X12Translator1.OutputFile := txtX12File.Text;
x12X12Translator1.Overwrite := chkX12Overwrite.Checked;
end;
x12X12Translator1.Translate;
tMemoX12String.Text := x12X12Translator1.OutputData;
finally
end;
StatusBar1.Panels[0].Text := 'Translated To X12';
end;
Be sure to head over and check out the IPWorks X12 library on the GetIt portal and download it in the IDE using the GetIt Package Manager.
Alysson Cunha has been programming with Delphi since he was 13, in 2001. His showcase entry (Firecast 8) won the grand prize of the Delphi 26th Showcase Challenge and we interviewed him on everything about his winning software and how it came about. Get more information about the entertainment software on the RRPG Firecast website.
When did you start using RAD Studio/Delphi and how long have you been using it?
I started programming when I was 13 years old (2001) and Delphi 4 was the first development tool I had contact with. I can say that the possibility of “dragging controls” helped a lot at the beginning of my learning. Over the years, I have learned other programming languages, tools, and IDEs, but Delphi has always been my favorite among my programming tools.
What was it like building software before you had RAD Studio/Delphi?
Well, since Delphi was my first contact with the software development world, I'm going to answer this question a little differently. The possibility of “dragging controls”, handling events incredibly easily, and previewing the outcome before compiling gave me huge support in my learning. A lot of people mistakenly claim that Delphi can't create all kinds of applications because it's just about “dragging and dropping components”. I have also heard many times that it was an “event-driven” language/IDE only. They couldn't be more wrong: this supposed limitation does not exist, and these features still give a huge boost to those who are just starting out.
How did RAD Studio/Delphi help you create your showcase application?
See, I have a feeling that I'm in control. All the code seems to be within my reach with the code navigation tools. I browse, discover, remember, reorganize, refactor, and move back and forward through the code to wherever I need to be with real confidence. No other tool has ever given me this sensation, which I cannot describe as well as I would like to.
What made RAD Studio/Delphi stand out from other options?
The multi-platform compilation feature, which made it possible to reuse a large part of the code I already had and keep a single codebase, was the strongest reason, in addition to the tools the IDE offers us while programming.
What made you happiest about working with RAD Studio/Delphi?
I feel like I'm starting to get repetitive, but I must point out the single codebase: the reuse of “old” code, plus great IDE tools to navigate through the code. What made me happiest about working with Delphi was along these lines.
What have you been able to achieve through using RAD Studio/Delphi to create your showcase application?
I was able to achieve great code organization and produce tons of lines of code that compile at great speed. Often, the speed with which the code compiles is not taken into account as much as it should be. With faster builds, we stay in a state of focus for longer. If I can use the term “immersion”, I would say that the speed of compilation allowed me to achieve great immersion during the programming task using Delphi (and, of course, some coffee).
What are some future plans for your showcase application?
The application is in the final stages of development and the first official version will be released on March 31, 2021, if all goes well. From there, we will ship bimonthly releases with updates, completing the program by the end of the year with all the features the legacy version has. Releases for 64-bit macOS and Linux are also in the plans, thanks to FMX. A release for Android should come next. The work won't end any time soon.
Thank you, Alysson! Check out the winning showcase link below.
The linker is a core part of the C++Builder toolchain: after all, it is the part that collects the compiler's output and builds your final binary, so it is hard to overstate its importance. And because it brings the whole application together at once, it can use a lot of memory. C++Builder 10.4.2 is not the first release in which we have introduced linker improvements: making the linker large-address aware, tuning the linker for today's typical applications, and documenting useful flags for tweaking the linker's behavior if you find it struggling with your application. (Not to mention adding useful features such as detecting mixes of classic and clang objects when linking, which helps you catch an invisible problem affecting your app's runtime stability; use it to make sure your app does not contain a common error and is stable.)
In C++Builder 10.4.2, we have made a significant improvement to the Win64 linker known as Split DWARF, something we are bringing to Windows from the Unix world.
Why the odd names?
Object files, the compiled form of each compilation unit, i.e. each .cpp file, are stored in different object formats. On Win64, C++Builder uses 64-bit ELF, a format normally used on Linux. It stands for Executable and Linkable Format, but we suspect it has its true origin in fans of Tolkien, because the auxiliary debug information format, which holds debug information for each object, is called DWARF. That one is definitely a pun (see page 2 of this PDF).
Split DWARF is not the result of a D&D tavern brawl, or even a Calvino novel. In fact, it is a way of splitting out the debug information so that the linker does not have to handle it.
That is the secret. What is the best way to reduce the memory the linker needs when linking your app?
It is almost ridiculously simple. Give it less to link.
Splitting debug information
Typically, when building your application in debug mode, or with any units with debug information turned on, the debug information is contained in the object file alongside the compiled code. The linker reads both and creates the final binary, your EXE or DLL, which likewise contains both compiled code and debug information. This is the main reason a debug EXE is so much larger than a normal release-mode app.
Split DWARF processes the object file and splits the debug information out into its own file that sits alongside it, a .dwo file. A tiny stub remains in the original object file, which the debugger can read to learn where to find the debug information.
To turn it on, in your project options go to Building > C++ Compiler > Debugging, and find the option "Use Split DWARF".
Turn it on.
Expand the option (click the > caret) and choose an absolute path where the debug information should go; that is, create a path that is not relative, e.g. one that starts with c: or d:. This is required to ensure the debugger can find the debug information.
And that's it. For more information, you can find our documentation here. You can use the great new Win64 debugger (with awesome features such as inspecting STL types; more have been added since that blog post) while giving your linker much less to link in C++Builder 10.4.2!
Вот и все. Для получения дополнительной информации вы можете найти нашу документацию здесь . Вы можете использовать новый замечательный отладчик Win64 (с потрясающими функциями, такими как проверка типов STL — после этого блога было добавлено больше), давая вашему компоновщику гораздо меньше возможностей для связывания в C ++ Builder 10.4.2!
The linker is a core part of the C++Builder toolchain – after all, it’s the part that collects the compiler’s output and creates your final binary, so it’s hard to overstate its importance! – and because it brings the entire application together at once, it can use a lot of memory. C++Builder 10.4.2 is not the first release where we’ve introduced linker improvements: making the linker large address aware, tuning the linker to today’s typical applications, and documenting useful flags for tweaking linker behaviour if you find the linker struggles with your application. (Not to mention adding useful features like detecting mixes of classic and clang objects when linking, which helps you catch an invisible issue affecting your app’s runtime stability – that is, use it to ensure your app doesn’t contain a common mistake and is stable.)
In C++Builder 10.4.2, we’ve made a big improvement to the Win64 linker known as Split DWARF, something we’re bringing to Windows from the Unix world.
Why the Odd Names?
Object files – the compiled form of each compilation unit, i.e. each .cpp file – are stored in different object formats. On Win64, C++Builder uses 64-bit ELF, a format normally used on Linux. It stands for Executable and Linkable Format, but we suspect it has its real origins with fans of Tolkien – because the ancillary debug information format, which holds debug info for each object, is named DWARF. That one definitely is a pun (see page 2 of this PDF).
Split DWARF is not the result of a D&D tavern fight, or even a Calvino novel. In fact, it’s a way of splitting the debug information out so the linker doesn’t have to handle it.
So here’s the secret. What’s the best way of reducing the amount of memory the linker needs when linking your app?
It’s almost ridiculously simple: give it less to link.
Splitting Debug Information
Typically, when building your application in debug mode, or with any units with debug information turned on, the debug information is contained in the object file along with the compiled code. The linker reads both and creates the final binary – your EXE or DLL – and that, too, contains both compiled code and debug information. This is the main reason a debug EXE is so much bigger than a normal release-mode app.
Split DWARF processes the object file and splits the debug information out into its own file that sits side-by-side, a .dwo file. A tiny stub remains in the original object file that the debugger can read to know where to find the debug information.
To turn it on, in your Project Options go to Building > C++ Compiler > Debugging, and find the “Use Split DWARF” option.
Turn it on.
Expand the option (click the > caret) and choose an absolute path where the debug information should go – that is, make a path that is not a relative path, e.g. begins with c: or d:. This is required to ensure the debugger can find the debug information.
And that’s it. For more info, you can find our documentation here. You can use the great new Win64 debugger (with awesome features, like inspecting STL types – more has been added since that blog) while giving your linker much less to link in C++Builder 10.4.2!
Sometimes developers need to programmatically identify the firewall rules created on a Windows machine from a Delphi app. Not sure how? Don’t worry: the MiTeC System Information Component Suite can enumerate the firewall rules created for a profile. In this blog post, we will learn how to use the MiTeC_FW component.
Platforms: Windows.
Installation Steps:
You can easily install this component suite from the GetIt Package Manager. The steps are as follows.
In the RAD Studio IDE, navigate to Tools > GetIt Package Manager, select Components under Categories, find the trial of the MiTeC System Information Component Suite 14.3, and click the Install button.
Read the license and click Agree All. An information dialog appears: “Requires a restart of RAD Studio at the end of the process. Do you want to proceed?” Click Yes to continue.
GetIt downloads the package and installs it. Once it is installed, click Restart Now.
How to run the Demo app:
Navigate to the Demos folder of the System Information Component Suite trial, which is created during the GetIt installation, e.g. C:\Users\...\Documents\Embarcadero\Studio\21.0\CatalogRepository\MiTeC-14.3\Demos\Delphi27.
Open the FW project in RAD Studio 10.4.1, then compile and run the application.
This demo app shows how to list the firewall rules that have been created and enumerate them.
Components used in the MiTeC FW demo app:
TMiTeC_FW enumerates settings and rules from the Windows Firewall. It also provides methods for rule management.
TListView to show the firewall settings and the properties of its rules.
TButton to save the listed firewall settings to a .sif file and to close the application.
Implementation Details:
An instance FW of TMiTeC_FW is created. The code loops up to FW.RuleCount and adds an item to the list view for each firewall rule, listing properties of each rule – such as Name, Description, AppName, ServiceName, Protocol, LocalPorts, RemotePorts, Enabled, etc. – as list view subitems.
Using the AddRule, RemoveRule, and EnableRule methods, you can add, remove, and enable firewall rules respectively.
procedure TForm2.RefreshData;
var
  i: Integer;
  s: string;
begin
  Screen.Cursor := crHourglass;
  try
    FW.RefreshData;
    cbxDomain.Checked := FW.DomainProfile;
    cbxPublic.Checked := FW.PublicProfile;
    cbxPrivate.Checked := FW.PrivateProfile;
    List.Items.Clear;
    for i := 0 to FW.RuleCount - 1 do
      with List.Items.Add do
      begin
        Caption := FW.Rules[i].Name;
        SubItems.Add(FW.Rules[i].Description);
        SubItems.Add(FW.Rules[i].AppName);
        SubItems.Add(FW.Rules[i].ServiceName);
        // Map the protocol number to a readable name
        case FW.Rules[i].Protocol of
          NET_FW_IP_PROTOCOL_TCP: s := 'TCP';
          NET_FW_IP_PROTOCOL_UDP: s := 'UDP';
          NET_FW_IP_PROTOCOL_ICMPv4: s := 'ICMPv4';
          NET_FW_IP_PROTOCOL_ICMPv6: s := 'ICMPv6';
        else
          s := IntToStr(FW.Rules[i].Protocol);
        end;
        SubItems.Add(s);
        SubItems.Add(FW.Rules[i].LocalPorts);
        SubItems.Add(FW.Rules[i].RemotePorts);
        SubItems.Add(FW.Rules[i].LocalAddresses);
        SubItems.Add(FW.Rules[i].RemoteAddresses);
        SubItems.Add(FW.Rules[i].ICMP);
        case FW.Rules[i].Direction of
          NET_FW_RULE_DIR_IN: s := 'In';
          NET_FW_RULE_DIR_OUT: s := 'Out';
        end;
        SubItems.Add(s);
        SubItems.Add(BoolToStr(FW.Rules[i].Enabled, True));
        SubItems.Add(BoolToStr(FW.Rules[i].Edge, True));
        case FW.Rules[i].Action of
          NET_FW_ACTION_ALLOW: s := 'Allow';
          NET_FW_ACTION_BLOCK: s := 'Block';
        end;
        SubItems.Add(s);
        SubItems.Add(FW.Rules[i].Grouping);
        SubItems.Add(FW.Rules[i].IntfTypes);
      end;
  finally
    Screen.Cursor := crDefault;
  end;
end;
The demo displays the available Windows firewall rules as shown below.
MiTeC_FW Demo
It’s that simple to enumerate the Windows firewall rules and list their properties in your application. Use this MiTeC component suite and get the job done quickly.
A trial version of 10.4.2 is already available, and future product purchases will include the 10.4.2 download. If you already own the product and have an active Update Subscription, you can use RAD Studio 10.4.2 with your existing license. 10.4.2 can be downloaded from the new customer portal (my.embarcadero.com).
How do Delphi, WPF .NET Framework, and Electron perform compared to each other, and what’s the best way to make an objective comparison? Embarcadero commissioned a whitepaper to investigate the differences between Delphi, WPF .NET Framework, and Electron for building Windows desktop applications. The benchmark application – a Windows 10 Calculator clone – was recreated in each framework by three volunteer Delphi Most Valuable Professionals (MVPs), one expert freelance WPF developer, and one expert freelance Electron developer. In this blog post we are going to explore the Tool Extension metric, which is part of the functionality comparison used in the whitepaper.
How can Tool Extensions be compared?
Can the framework be extended in its own language? Frameworks that require plug-ins, extensions, or modifications to be written in a different language impose costs on businesses that require altered functionality. Rather than creating the required tool from resident knowledge, businesses may have to invest time and resources to hire an external contractor or build in-house skills in that alternate language.
Delphi ships with testing software and also gives businesses the opportunity to develop tools and extensions for the framework using the same talent that builds their product (the Delphi IDE is programmed in Delphi). WPF offers testing libraries through Visual Studio, and businesses can enjoy the large third-party tool and extension environment, but may need to outsource work to build their own extensions or invest in talent for non-WPF languages. Electron lacks a native IDE, giving businesses a choice but also removing some conveniences like integrated compilation and included testing libraries. Businesses developing in-house tools would have a more difficult time with Electron than the other frameworks.
Let’s take a look at each framework.
What tool extension capabilities are available for Delphi?
The RAD Studio IDE for Delphi is written in Delphi. Users can build their own extensions and tools in Delphi, eliminating the need to learn a new language and handle language boundary problems. Additionally, extensions and tools can be built in C++ via the C++Builder side of RAD Studio.
RAD Studio has a powerful API allowing you to extend or modify the IDE’s behavior. Create a package or DLL plugin that adds new tool windows, draws in the code editor, provides code completion, adds new project types, file types, and highlighting, hooks into high-level and low-level events, tracks processes and threads while debugging, and more.
There is a rich ecosystem of both open and closed source add-ons. There are a number of add-ons available directly via Embarcadero GetIt in the IDE.
The following is an excerpt from the Extending the Delphi IDE whitepaper by Bruno Fierens and Embarcadero.
What is the basic architecture of the Delphi IDE API?
The API is heavily based on interfaces, which typically start with the prefix IOTA or INTA. The IDE exposes many interfaces that can be called from the plugin; conversely, the IDE itself can also call code from the plugin when a specific action is triggered in the IDE. In most cases, you inform the IDE that the plugin has a handler for these actions by writing a class descending from TNotifierObject that implements an interface, and registering that class with the IDE. As a plugin writer, you will find yourself mostly writing code that calls the IDE interfaces, and writing classes that implement interfaces that the IDE will call.
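To make that two-way flow concrete, here is a minimal sketch of the notifier pattern in C++. The names IFileNotifier, Ide, and LoggingPlugin are hypothetical stand-ins for illustration only; they are not part of the actual IOTA/INTA API.

```cpp
#include <string>
#include <vector>

// The callback interface the host defines; a plugin implements it,
// analogous to implementing an IOTA notifier interface.
struct IFileNotifier {
    virtual ~IFileNotifier() = default;
    virtual void fileOpened(const std::string& name) = 0;
};

// Stands in for the IDE side of the API: it keeps a list of
// registered notifiers and calls back into them when events fire.
struct Ide {
    std::vector<IFileNotifier*> notifiers;
    void registerNotifier(IFileNotifier* n) { notifiers.push_back(n); }
    void openFile(const std::string& name) {
        for (auto* n : notifiers) n->fileOpened(name);  // IDE -> plugin call
    }
};

// The plugin side: a concrete notifier, analogous to a class
// descending from TNotifierObject that the plugin registers.
struct LoggingPlugin : IFileNotifier {
    std::vector<std::string> opened;
    void fileOpened(const std::string& name) override { opened.push_back(name); }
};
```

The key design point is the same in both directions: the plugin calls host interfaces directly, while the host only ever sees the plugin through the interface it registered.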
What areas of the RAD Studio IDE can be extended?
The Delphi IDE can be extended in many ways. This is a brief overview of the most common areas of extending the IDE:
Create and add custom docking panels: It is possible to add custom docking panels like the component palette panel, the Object Inspector panel, etc.
Interact with the code editor: Interfaces are offered to programmatically manipulate the Delphi IDE code editor; for example, to insert snippets of code, replace text, handle special key sequences, add custom syntax highlighters, and more.
Interact with Code Insight: Code Insight in the editor can be customized as well, offering custom help texts on specific constructs in the code.
Interact with the Project Manager: The IDE allows you to add custom context menus to projects and files in the IDE Project Manager tool panel.
Add custom wizards and items to the repository: It is possible to add custom items, or start custom wizards from items added to the Delphi repository. From these wizards, new project types, specific form types, or data modules can be created.
Interact with ToDo items: An API is also available to interact with ToDo items in code from a Delphi IDE extension.
Interact with the debugger, create custom debugger visualizers: In newer Delphi versions, an IDE extension can be added that provides a custom display of a specific data type while debugging.
Interact with the form designer: From a Delphi plugin, an API is available to interact with the form designer as well.
Splash screen notifications: An interface is provided to add custom text on the splash screen during the startup of the IDE.
What tool extension capabilities are available for WPF .NET Framework?
Visual Studio, the native WPF IDE, can be extended in a number of ways and in multiple languages. Macros are written in Visual Basic, Add-ins are written in .NET, and Packages can be written in .NET, C#, C++, or Visual Basic. Because WPF is written in XAML and ties into a C# logical back-end, businesses might not have the in-house experience to build the tools they need to enhance their development environments without outsourcing the work or investing in training.
According to Microsoft, Visual Studio allows extending “menus, toolbars, commands, windows, solutions, projects, editors, and so on.” Additionally, it lists the following common items which can be extended.
Extending Menus and Commands
Extending and Customizing Tool Windows
Editor and Language Service Extensions
Extending Projects
Extending User Settings and Options
Extending Properties and the Property Window
Extending Other Parts of Visual Studio
Visual Studio Isolated Shell
Find out more about extending Visual Studio here: https://docs.microsoft.com/en-us/visualstudio/extensibility/starting-to-develop-visual-studio-extensions?view=vs-2019
What tool extension capabilities are available for Electron?
Electron lacks a native IDE but can use plug-ins available in IDEs such as Visual Studio Code. Additional Electron tools might have to be developed in-house from scratch or integrated with a third-party tool such as Visual Studio Code. There are a large number of open source projects around tooling and functionality for Electron.
A popular editor used with Electron is Visual Studio Code; other popular choices are Atom, Sublime Text, Notepad++, and other text editors. Many of these text editors, including VS Code, support extensions, but each has its own extension system, and therefore the extensions for Electron are scattered around and of varying quality.
Some of these tools include:
Electron Builder
Electron Snippets
Electron Build Tools
In conclusion, we have looked at tool extension capabilities in Delphi, WPF .NET Framework, and Electron tooling. Delphi provides the broadest tool extension capabilities, with a significant long-term history behind the existing tools. It can be difficult to build tool extensions for WPF .NET Framework, as the in-house experience to build tools may not be available. Additionally, as WPF .NET Framework is a legacy framework according to Microsoft, businesses may not want to allocate budget to supporting it. Electron is only a framework and therefore doesn’t have the same tool extension system that an integrated IDE like Delphi / RAD Studio or Visual Studio provides. The text editors that do support Electron each have their own unique plugin systems. Overall, Delphi / RAD Studio provides the richest tool extension ecosystem.
Ready to explore all the metrics in the “Discovering The Best Developer Framework Through Benchmarking” whitepaper?
We have been exploring some of the best IoT solutions with Delphi and C++Builder in recent posts – for instance, the Heart Rate Monitor IoT component pack, which provides heart rate measurements for IoT devices that utilize the standard GATT-based heart rate service.
If you have been on the GetIt portal, you have seen that there are dozens of IoT components available for RAD Studio. One of them is the Zephyr HxM Smart by Zephyr.
With it, you can connect to these IoT devices quickly while writing less code. All the units are provided when you install the library, and there is a complete demo application showing how to use the component itself.
Iot.Device.ZephyrHeartRateMonitor,
Iot.Device.ZephyrHeartRateMonitorHelper,
Iot.Device.ZephyrHeartRateMonitorTypes
These are the main units for the Zephyr HxM Smart by Zephyr which help you to communicate with the hardware.
function TZephyrHeartRateMonitor.GetSoftwareRevision: string;
begin
  if Device <> nil then
    Result := Device.SoftwareRevision
  else
    Result := Default(string);
end;

function TZephyrHeartRateMonitor.GetManufacturerName: string;
begin
  if Device <> nil then
    Result := Device.ManufacturerName
  else
    Result := Default(string);
end;

function TZephyrHeartRateMonitor.GetIEEERegulatory: TGattHealthInformatics20601;
begin
  if Device <> nil then
    Result := Device.IEEERegulatory
  else
    Result := Default(TGattHealthInformatics20601);
end;

function TZephyrHeartRateMonitor.GetPnPID: TGattPnPID;
begin
  if Device <> nil then
    Result := Device.PnPID
  else
    Result := Default(TGattPnPID);
end;

function TZephyrHeartRateMonitor.GetBatteryLevel: Byte;
begin
  if Device <> nil then
    Result := Device.BatteryLevel
  else
    Result := Default(Byte);
end;
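Notice the pattern shared by all these getters: when no device is connected, they return a default-constructed value instead of dereferencing a nil device. A minimal sketch of the same guard in C++ follows; the Device and HeartRateMonitor types here are hypothetical illustrations, not part of the component pack’s API.

```cpp
#include <string>

// Hypothetical stand-in for the connected sensor.
struct Device {
    std::string manufacturerName;
    unsigned char batteryLevel;
};

// Same guard as the Delphi getters: fall back to a default value
// rather than dereferencing a null device pointer.
struct HeartRateMonitor {
    Device* device = nullptr;  // null until a sensor connects

    std::string manufacturerName() const {
        return device ? device->manufacturerName : std::string{};
    }
    unsigned char batteryLevel() const {
        return device ? device->batteryLevel : static_cast<unsigned char>(0);
    }
};
```

The payoff is that UI code can call the getters unconditionally, before or after a sensor connects, without any crash risk.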
Be sure to head over and check out the Zephyr Heart Rate Monitor IoT component pack on the GetIt portal and download it from the IDE using the GetIt Package Manager.
C++Builder 10.4.2 brings some great features we believe will really help you – the biggest being ‘split DWARF’, a way to reduce memory usage in the linker by removing debug information. If you have projects that push the linker’s limits, check it out: it may solve your problems (see this blog post). However, RAD Studio 10.4.2 overall was also very much a ‘quality release’. In fact, despite 10.4.1 being the release aimed at quality and 10.4.2 at features you need, we fixed more issues in 10.4.2 than in 10.4.1!
And C++Builder is no exception.
C++ Exception Handling
This wonderful pun introduces the exception handling work we’ve done in 10.4.2. If that’s too long to read, here’s the TL;DR: 10.4.2 gives your apps very high stability and more correct behaviour when handling exceptions.
We analyse categories of issue reports we get, and also do a lot of work that helps us find issues internally. Some of that work is through supporting C++ libraries: using external code is a good way to ensure our toolchain is compatible. Because of those analyses, in 10.4.2 we revised much of our exception handling for Windows.
The scenarios we looked at are:
In-module exceptions, when an exception is thrown and caught in the same binary, such as all within one EXE.
Cross-module exceptions, when an exception crosses a module boundary, such as being thrown in a DLL but caught in an EXE. This is a more difficult situation to handle, and coding guidelines indicate that no exceptions should leak out of a module into another… but, we see code where this occurs and it’s an important scenario to tackle. It is common with packages, or when multiple DLLs and an EXE are bundled together as an app.
Cross-language exceptions, when an exception crosses stack frames belonging to both Delphi and C++. Exceptions can be raised in one language and caught in another, or cross the boundary multiple times.
When all modules (e.g. both an EXE and a DLL) are statically linked, or all modules are dynamically linked (dynamic RTL).
OS, C++, and SEH exceptions
Both Win32 and Win64 platforms.
Many of these scenarios, especially cross-module with different linking, can get complex. One of the main reasons is handling the deallocation of an exception or exception metadata in the RTL. For example, suppose a DLL, which is fully statically linked and has its own copy of the RTL, throws an exception. How can an EXE, which is also statically linked with its own copy of the RTL, or is dynamically linked but therefore still has a different copy of the RTL to the DLL, handle freeing memory associated with the exception?
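To see the flow whose ownership is at stake, here is a minimal single-binary sketch in C++. The function risky stands in for a hypothetical DLL export; in the real cross-module case the throwing side and the catching side can each hold their own copy of the RTL, which is exactly why deallocation of the exception object is the hard part.

```cpp
#include <stdexcept>
#include <string>

// Stands in for a function exported from a DLL; in the real
// cross-module scenario this would live in a separate binary
// with its own RTL copy.
static std::string risky(bool fail) {
    if (fail) throw std::runtime_error("device not ready");
    return "ok";
}

// The "EXE side": catches whatever crosses the boundary. When the
// frames belong to different modules, the runtime must still be able
// to destroy the exception object allocated by the throwing side.
static std::string callAcrossBoundary(bool fail) {
    try {
        return risky(fail);
    } catch (const std::exception& e) {
        return std::string("caught: ") + e.what();
    }
}
```

Inside one binary this is trivial; the 10.4.2 work is about making the same handshake hold when the throw and the catch sit in different modules.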
Yet in 10.4.2 we do handle those scenarios, and support applications where all modules are statically linked, or all are dynamically linked. We do not support cross-module exceptions in mixes of dynamic/static RTL within the one application.
This means that in 10.4.2 you should see significantly improved exception handling behavior and a large number of quality issues resolved for in-module exceptions, cross-module exceptions, where modules are all statically or all dynamically linked, for OS, C++ and SEH exceptions, and across both Win32 and Win64 – a massive test matrix.
With every release we aim to steadily improve C++Builder, and 10.4.2 is – one could say – exceptional.
C ++ Builder 10.4.2 bietet einige großartige Funktionen, von denen wir glauben, dass sie Ihnen wirklich helfen werden – die größte davon ist “ Split DWARF „, eine Möglichkeit, die Speichernutzung im Linker durch Entfernen von Debug-Informationen zu reduzieren. Wenn Sie Projekte haben, die die Grenzen des Linkers überschreiten, probieren Sie es aus: Es kann Ihre Probleme lösen (siehe diesen Blog-Beitrag ). RAD Studio 10.4.2 war jedoch insgesamt auch eine „Qualitätsversion“. Obwohl 10.4.1 die Version ist, die auf Qualität und 10.4.2 auf die von Ihnen benötigten Funktionen abzielt, haben wir in 10.4.2 mehr Probleme behoben als in 10.4.1!
Und C ++ Builder ist keine Ausnahme.
C ++ – Ausnahmebehandlung
Dieses wunderbare Wortspiel führt in die Ausnahmebehandlung ein, die wir in 10.4.2 durchgeführt haben. Wenn dies zu lang ist, bietet die TLDR: 10.4.2 Ihren Apps eine sehr hohe Stabilität und ein korrekteres Verhalten bei der Behandlung von Ausnahmen.
Wir analysieren Kategorien von Problemberichten, die wir erhalten, und leisten viel Arbeit, um Probleme intern zu finden. Ein Teil dieser Arbeit erfolgt durch die Unterstützung von C ++ – Bibliotheken : Die Verwendung von externem Code ist ein guter Weg, um sicherzustellen, dass unsere Toolchain kompatibel ist. Aufgrund dieser Analysen haben wir in 10.4.2 einen Großteil unserer Ausnahmebehandlung für Windows überarbeitet.
Die Szenarien, die wir uns angesehen haben, sind:
Ausnahmen im Modul , wenn eine Ausnahme ausgelöst und in derselben Binärdatei abgefangen wird, z. B. alle innerhalb einer EXE-Datei.
Modulübergreifende Ausnahmen , wenn eine Ausnahme eine Modulgrenze überschreitet, z. B. wenn sie in einer DLL ausgelöst, aber in einer EXE-Datei abgefangen wird. Dies ist eine schwierigere Situation, und die Codierungsrichtlinien weisen darauf hin, dass keine Ausnahmen aus einem Modul in ein anderes gelangen sollten. Wir sehen jedoch Code, wo dies auftritt, und es ist ein wichtiges Szenario, das angegangen werden muss. Dies ist bei Paketen üblich oder wenn mehrere DLLs und eine EXE-Datei als App gebündelt sind.
Sprachübergreifende Ausnahmen , wenn eine Ausnahme Stapelrahmen überschreitet, die sowohl zu Delphi als auch zu C ++ gehören. Ausnahmen können in einer Sprache ausgelöst und in einer anderen abgefangen werden oder die Grenze mehrmals überschreiten.
Wenn alle Module (z. B. sowohl eine EXE- als auch eine DLL-Datei) statisch verknüpft sind oder alle Module dynamisch verknüpft sind (dynamische RTL).
OS-, C ++ – und SEH- Ausnahmen
Sowohl Win32- als auch Win64- Plattformen.
Viele dieser Szenarien, insbesondere modulübergreifende Szenarien mit unterschiedlichen Verknüpfungen, können komplex werden. Einer der Hauptgründe ist die Behandlung der Freigabe einer Ausnahme oder von Ausnahmemetadaten in der RTL. Angenommen, eine DLL, die vollständig statisch verknüpft ist und über eine eigene Kopie der RTL verfügt, löst eine Ausnahme aus. Wie kann eine EXE-Datei, die auch statisch mit ihrer eigenen Kopie der RTL verknüpft ist oder dynamisch verknüpft ist, aber dennoch eine andere Kopie der RTL als die DLL hat, mit der Freigabe des mit der Ausnahme verbundenen Speichers umgehen?
In 10.4.2 behandeln wir diese Szenarien und unterstützen Anwendungen, bei denen alle Module statisch oder alle dynamisch verknüpft sind. Wir unterstützen keine modulübergreifenden Ausnahmen in Mischungen aus dynamischer / statischer RTL innerhalb einer Anwendung.
Dies bedeutet, dass Sie in 10.4.2 ein deutlich verbessertes Verhalten bei der Ausnahmebehandlung und eine große Anzahl von Qualitätsproblemen sehen sollten, die für modulinterne Ausnahmen, modulübergreifende Ausnahmen, bei denen alle Module statisch oder alle dynamisch verknüpft sind, für OS, C ++ und SEH behoben wurden Ausnahmen und sowohl über Win32 als auch über Win64 – eine massive Testmatrix.
Mit jeder Version wollen wir C ++ Builder stetig verbessern, und 10.4.2 ist – man könnte sagen – außergewöhnlich.
C ++ Builder 10.4.2 trae algunas características geniales que creemos que realmente lo ayudarán, la más grande es ‘ split DWARF ‘, una forma de reducir el uso de memoria en el vinculador al eliminar la información de depuración. Si tiene proyectos que superan los límites del vinculador, compruébelo: puede resolver sus problemas (consulte esta publicación de blog ). Sin embargo, RAD Studio 10.4.2 en general también fue en gran medida una ‘versión de calidad’. De hecho, a pesar de que 10.4.1 es la versión dirigida a la calidad y 10.4.2 a las características que necesita, ¡solucionamos más problemas en 10.4.2 que en 10.4.1!
Y C ++ Builder no es una excepción.
Manejo de excepciones de C ++
Este maravilloso juego de palabras presenta el trabajo de manejo de excepciones que hicimos en 10.4.2. Si es demasiado largo, aquí está el TLDR: 10.4.2 brinda a sus aplicaciones una estabilidad muy alta y un comportamiento más correcto al manejar excepciones.
Analizamos las categorías de informes de problemas que recibimos y también hacemos mucho trabajo que nos ayuda a encontrar problemas internamente. Parte de ese trabajo se realiza mediante el soporte de bibliotecas de C ++ : el uso de código externo es una buena manera de garantizar que nuestra cadena de herramientas sea compatible. Debido a esos análisis, en 10.4.2 revisamos gran parte de nuestro manejo de excepciones para Windows.
Los escenarios que analizamos son:
Excepciones en el módulo , cuando se lanza una excepción y se captura en el mismo binario, como todo dentro de un EXE.
Excepciones entre módulos , cuando una excepción cruza el límite de un módulo, como ser lanzada en una DLL pero atrapada en un EXE. Esta es una situación más difícil de manejar, y las pautas de codificación indican que no se deben filtrar excepciones de un módulo a otro … pero vemos código donde esto ocurre y es un escenario importante a abordar. Es común con los paquetes, o cuando se combinan varios archivos DLL y un EXE como una aplicación.
Excepciones entre lenguajes , cuando una excepción cruza marcos de pila que pertenecen tanto a Delphi como a C ++. Las excepciones pueden plantearse en un idioma y detectarse en otro, o cruzar el límite varias veces.
Cuando todos los módulos (por ejemplo, un EXE y DLL) están vinculados estáticamente , o todos los módulos están vinculados dinámicamente (RTL dinámico).
Excepciones de OS, C ++ y SEH
Ambas plataformas Win32 y Win64 .
Muchos de estos escenarios, especialmente los de módulos cruzados con diferentes enlaces, pueden volverse complejos. Una de las principales razones es manejar la desasignación de una excepción o metadatos de excepción en el RTL. Por ejemplo, supongamos que una DLL, que está completamente vinculada estáticamente y tiene su propia copia de RTL, lanza una excepción. ¿Cómo puede un EXE, que también está vinculado estáticamente con su propia copia de RTL, o está vinculado dinámicamente, pero por lo tanto todavía tiene una copia diferente de RTL a la DLL, manejar la liberación de memoria asociada con la excepción?
Sin embargo, en 10.4.2 manejamos esos escenarios y admitimos aplicaciones en las que todos los módulos están vinculados estáticamente o todos están vinculados dinámicamente. No admitimos excepciones entre módulos en mezclas de RTL dinámicas / estáticas dentro de una aplicación.
Esto significa que en 10.4.2 debería ver un comportamiento de manejo de excepciones significativamente mejorado y una gran cantidad de problemas de calidad resueltos para excepciones en el módulo, excepciones entre módulos, donde los módulos están todos vinculados estáticamente o dinámicamente, para OS, C ++ y SEH excepciones, y en Win32 y Win64, una matriz de prueba masiva.
Con cada lanzamiento, nuestro objetivo es mejorar constantemente C ++ Builder, y se podría decir que 10.4.2 es excepcional.
How do Delphi, WPF .NET Framework, and Electron compare to one another, and what is the best way to make an objective comparison? Embarcadero commissioned a whitepaper to investigate the differences between Delphi, WPF .NET Framework, and Electron for building Windows desktop applications. The benchmark application, a Windows 10 Calculator clone, was recreated in each framework by three volunteer Delphi Most Valuable Professionals (MVPs), an expert freelance WPF developer, and an expert freelance Electron developer. In this blog post, we explore the Tool Extension metric, which is part of the functionality comparison used in the whitepaper.
How can tool extensions be compared?
Can the framework be extended in its own language? Frameworks that require plug-ins, extensions, or modifications to be written in a different language impose costs on businesses that need altered functionality. Instead of building the required tool from resident knowledge, companies may have to invest time and resources to hire an outside contractor or to develop in-house skills in that alternative language.
Delphi ships with testing software and gives businesses the opportunity to develop tools and extensions for the framework using the same talent that builds their product (the Delphi IDE is programmed in Delphi). WPF offers testing libraries through Visual Studio, and businesses can enjoy the large third-party tool and extension environment, but they may need to outsource the work of building their own extensions or invest in talent for non-WPF languages. Electron lacks a native IDE, which gives businesses a choice but also removes conveniences such as integrated compilation and bundled testing libraries. Businesses developing internal tools would have a harder time with Electron than with the other frameworks.
Let's take a look at each framework.
What tool extension capabilities are available for Delphi?
The RAD Studio IDE for Delphi is written in Delphi. Users can build their own extensions and tools in Delphi, eliminating the need to learn a new language and to deal with language-boundary issues. In addition, extensions and tools can be built in C++ via the C++Builder side of RAD Studio.
RAD Studio has a powerful API that lets you extend or modify the IDE's behavior. Build a package or DLL plugin that adds new tool windows, draws in the code editor, provides code completion, adds new project types, file types, and highlighting, hooks into high- and low-level events, tracks processes and threads during debugging, and more.
There is a rich ecosystem of open- and closed-source add-ons, and a number of add-ons are available directly through Embarcadero GetIt in the IDE.
The following is an excerpt from the whitepaper Extending the Delphi IDE by Bruno Fierens and Embarcadero.
What is the basic architecture of the Delphi IDE API?
The API is heavily based on interfaces, which usually start with the prefix IOTA or INTA. The IDE exposes many interfaces that can be called from the plugin; conversely, the IDE itself can also call code in the plugin when a specific action is triggered in the IDE. To inform the IDE that the plugin has a handler for these actions, in most cases you write a class descending from TNotifierObject that implements an interface, and you register that class with the IDE. As a plugin writer, you will mostly find yourself writing code that calls the IDE interfaces, and writing classes that implement interfaces which will be called from the IDE.
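Since extensions can also be written in C++ via C++Builder, the registration pattern just described can be sketched as a minimal wizard skeleton. This is an illustrative sketch only: it must be built as a design-time package inside RAD Studio and will not compile standalone, and the wizard name and ID string are placeholders. On the C++ side, `TCppInterfacedObject` supplies the interface reference counting, so the empty `IOTANotifier` handlers that `TNotifierObject` would provide in Delphi are written out explicitly.

```cpp
#include <ToolsAPI.hpp>
#include <Vcl.Dialogs.hpp>

// Minimal OpenTools API wizard, built as a design-time package.
class TSampleWizard : public TCppInterfacedObject<IOTAWizard>
{
public:
    // IOTAWizard: identify the plugin to the IDE.
    String __fastcall GetIDString() { return "Sample.HelloWizard"; } // must be unique
    String __fastcall GetName()     { return "Hello Wizard"; }
    TWizardState __fastcall GetState() { return TWizardState() << wsEnabled; }

    // Called when the wizard is invoked from the IDE.
    void __fastcall Execute() { ShowMessage("Hello from the OpenTools API"); }

    // IOTANotifier: empty handlers (inherited from TNotifierObject in Delphi).
    void __fastcall AfterSave()  {}
    void __fastcall BeforeSave() {}
    void __fastcall Destroyed()  {}
    void __fastcall Modified()   {}
};

namespace Samplewizard
{
    // Package entry point: hands the wizard to the IDE.
    void __fastcall PACKAGE Register()
    {
        RegisterPackageWizard(new TSampleWizard());
    }
}
```

The Delphi version is shorter because descending from TNotifierObject inherits the empty notifier methods; the structure, one class implementing an IOTA interface plus a registration call, is the same in both languages.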
Which areas of the RAD Studio IDE can be extended?
The Delphi IDE can be extended in many ways. This is a brief overview of the most common areas of IDE extension:
Creating and adding custom docking panels: It is possible to add custom docking panels like the component palette panel, the object inspector panel, and so on.
Interacting with the code editor: Interfaces are offered to programmatically manipulate the Delphi IDE code editor; for example, to insert code snippets, replace text, handle special key sequences, add custom syntax highlighters, and more.
Interacting with Code Insight: Code Insight in the editor can also be customized, offering custom help texts on specific constructs in the code.
Interacting with the Project Manager: The IDE lets you add custom context menus for projects and files in the IDE's Project Manager tool panel.
Adding custom wizards and items to the repository: It is possible to add custom items, or to launch custom wizards from items added to the Delphi repository. From these wizards, new project types, specific form types, or data modules can be created.
Interacting with ToDo items: An API is also available to interact with ToDo items in code from a Delphi IDE extension.
Interacting with the debugger, creating custom debugger visualizers: In more recent Delphi versions, an IDE extension can be added that provides a custom display of a specific data type while debugging.
Interacting with the form designer: From a Delphi plugin, an API is available to interact with the form designer as well.
Splash screen notifications: An interface is provided for adding custom text on the splash screen during IDE startup.
What tool extension capabilities are available for WPF .NET Framework?
Visual Studio, the native WPF IDE, can be extended in several ways and in multiple languages. Macros are written in Visual Basic, add-ins are written in .NET, and packages can be written in .NET, C#, C++, or Visual Basic. Because WPF is written in XAML and tied to a logical C# back end, businesses may not have the in-house experience to build the tools they need to enhance their development environments without outsourcing the work or investing in training.
According to Microsoft, Visual Studio allows extending "menus, toolbars, commands, windows, solutions, projects, editors, and so on." In addition, it lists the following common items that can be extended.
Extending menus and commands
Extending and customizing tool windows
Editor and language service extensions
Extending projects
Extending user settings and options
Extending properties and the Properties window
Extending other parts of Visual Studio
Visual Studio Isolated Shell
Learn more about extending Visual Studio here: https://docs.microsoft.com/en-us/visualstudio/extensibility/starting-to-develop-visual-studio-extensions?view=vs-2019
What tool extension capabilities are available for Electron?
Electron lacks a native IDE, but it can use plug-ins available in IDEs such as Visual Studio Code. Additional Electron tooling may have to be developed in-house from scratch or integrated with a third-party tool such as Visual Studio Code. There are a large number of open-source projects around tooling and functionality for Electron.
A popular editor used with Electron is Visual Studio Code. Other popular editors are Atom, Sublime Text, Notepad++, and other text editors. Many of these text editors, including VS Code, support extensions, but each is uniquely different, so the extensions for Electron are scattered and of varying quality.
Some of these tools include:
Electron Builder
Electron snippets
Electron Build Tools
In conclusion, we looked at the tool extension capabilities in Delphi, WPF .NET Framework, and Electron tooling. Delphi provides the broadest tool extension capabilities, with a significant long-term history behind the existing tools. It can be difficult to build tool extensions for WPF .NET Framework, since in-house experience for building tools may not be available; and because WPF .NET Framework is, according to Microsoft, a legacy framework, businesses may not want to allocate budget to support it. Electron is only a framework and therefore lacks the tool extension system that an integrated IDE like Delphi/RAD Studio or Visual Studio provides; the text editors that support Electron each have their own plugin systems. Overall, Delphi/RAD Studio provides the richest tool extension ecosystem.
Ready to explore all the metrics in the whitepaper "Discovering the Best Developer Framework Through Benchmarking"?
How do Delphi, WPF .NET Framework, and Electron compare with one another, and what is the best way to make an objective comparison? Embarcadero commissioned a whitepaper to examine the differences between Delphi, WPF .NET Framework, and Electron for building Windows desktop applications. The benchmark application, a clone of the Windows 10 Calculator, was recreated in each framework by three volunteer Delphi Most Valuable Professionals (MVPs), one expert freelance WPF developer, and one expert freelance Electron developer. In this blog post we are going to examine the Tool Extension metric, which is part of the functionality comparison used in the whitepaper.
How can tool extensions be compared?
Can the framework be extended in its native language? Frameworks that require plugins, extensions, or modifications to be written in a different language impose costs on companies that need changed functionality. Rather than building the required tool from in-house knowledge, companies may have to invest time and resources to hire an outside contractor or train employees in that alternative language.
Delphi ships with testing software and also gives companies the ability to develop tools and extensions for the platform using the same talent that builds their product (the Delphi IDE is itself programmed in Delphi). WPF offers testing libraries through Visual Studio, and businesses can draw on a large third-party tool and extension environment, but they may need to outsource the work of building custom extensions or invest in talent for other languages. Electron lacks a native IDE, which gives businesses a choice of editor but removes some conveniences, such as built-in compilation and bundled testing libraries. Companies developing their own tools will find this harder with Electron than with the other frameworks.
Let's look at each framework.
What tool extension capabilities are available for Delphi?
The RAD Studio IDE for Delphi is written in Delphi. Users can build their own extensions and tools in Delphi, removing the need to learn a new language and deal with language-boundary issues. In addition, extensions and tools can be built in C++ through the C++Builder component of RAD Studio.
RAD Studio has a powerful API that lets you extend or modify the IDE's behavior. You can create a package or DLL plugin that adds new tool windows, draws in the code editor, provides code completion, adds new project types, file types, and highlighting, hooks into high-level and low-level events, monitors processes and threads while debugging, and more.
There is a rich ecosystem of both open source and commercial add-ins. A number of add-ins are available directly through Embarcadero GetIt in the IDE.
The following is an excerpt from the whitepaper "Extending the Delphi IDE" by Bruno Fierens and Embarcadero.
What is the basic architecture of the Delphi IDE API?
The API is heavily interface-based. The interfaces typically start with the prefix IOTA or INTA. The IDE exposes many interfaces that can be called from a plugin; conversely, the IDE itself can also call code in a plugin when a specific action is triggered in the IDE. To inform the IDE that the plugin has a handler for these actions, in most cases you write a class descending from TNotifierObject that implements an interface, and register that class with the IDE. As a plugin writer, you will mostly write code that calls the IDE interfaces and write classes implementing interfaces that will be called from the IDE.
What areas of the RAD Studio IDE can be extended?
The Delphi IDE can be extended in many ways. This is a brief overview of the most common areas of IDE extension:
Creating and adding custom dockable panels. Custom dockable panels can be added, similar to the component palette panel, the Object Inspector panel, and so on.
Interacting with the code editor. Interfaces are offered to programmatically control the Delphi IDE code editor; for example, to insert code snippets, replace text, handle special key sequences, add custom syntax highlighting, and much more.
Interacting with Code Insight. Code Insight in the editor can also be customized, offering custom help texts for specific constructs in the code.
Interacting with the Project Manager. The IDE allows creating custom context menus for projects and files in the IDE's Project Manager panel.
Adding custom wizards and items to the repository. Custom items can be added, or custom wizards launched from items added to the Delphi repository. These wizards can create new project types, specific form types, or data modules.
Interacting with ToDo items. An API is also available to interact with ToDo items in code from a Delphi IDE extension.
Interacting with the debugger, and creating custom debugger visualizers. In newer Delphi versions, an IDE extension can be added that provides a custom visualization of a specific data type during debugging.
Interacting with the form designer. From a Delphi plugin, an API is available to interact with the form designer as well.
Splash screen notifications. An interface is provided to add custom text to the splash screen during IDE startup.
What tool extension capabilities are available for WPF .NET Framework?
Visual Studio, the native WPF IDE, can be extended in several ways and in several languages. Macros are written in Visual Basic, add-ins are written in .NET, and packages can be written in .NET, C#, C++, or Visual Basic. Because WPF is written in XAML and tied to a C# logic back end, companies may not have the in-house expertise to build the tools they need to enhance their development environments without outsourcing the work or investing in training.
According to Microsoft, Visual Studio lets you extend “menus, toolbars, commands, windows, solutions, projects, editors, and so on.” In addition, it lists the following common items that can be extended:
Extending menus and commands
Extending and customizing tool windows
Editor and language service extensions
Extending projects
Extending user settings and options
Extending properties and the Properties window
Extending other parts of Visual Studio
Visual Studio Isolated Shell
Learn more about extending Visual Studio here: https://docs.microsoft.com/en-us/visualstudio/extensibility/starting-to-develop-visual-studio-extensions?view=vs-2019
What tool extension capabilities are available for Electron?
Electron has no native IDE, but it can use plugins available in IDEs such as Visual Studio Code. Additional Electron tools may have to be developed in-house from scratch or integrated with a third-party tool such as Visual Studio Code. There is a large number of open source projects around tooling and functionality for Electron.
A popular editor used with Electron is Visual Studio Code. Other popular editors are Atom, Sublime Text, Notepad++, and other text editors. Many of these text editors, including VS Code, support extensions, but each is unique, so extensions for Electron are scattered and of varying quality.
Some of these tools include:
Electron Builder
Electron Snippets
Electron Build Tools
In conclusion, we examined the tool extension capabilities of Delphi, WPF .NET Framework, and Electron. Delphi provides the broadest tool extension capabilities, with a significant long-term history behind its existing tools. Building tool extensions for WPF .NET Framework can be difficult, since the in-house expertise to build the tools may not be available. In addition, because WPF .NET Framework is a legacy framework, according to Microsoft, companies may not want to allocate budget to support it. Electron is only a framework, and therefore lacks the kind of tool extension system that integrated IDEs such as Delphi/RAD Studio and Visual Studio provide. Each of the text editors that do support Electron has its own unique plugin system. Overall, Delphi/RAD Studio provides the richest tool extension ecosystem.
Ready to explore all the metrics in the "Discovering the best developer framework through benchmarking" whitepaper?
We have more post picks for you from the LearnCPlusPlus.org website; here we've listed some of the interesting posts from February. If you are a beginner or want to jump into C++Builder, please visit LearnCPlusPlus.org for great posts ranging from the basics to professional examples, with full code, snippets, and more.
Do you want to learn how to import 3D objects as a Model3D into a Viewport in C++Builder? We had a simple Perseverance simulation example in C++Builder. Do you want to learn, or refresh your memory on, how to sort with bubble sort, quick sort, or merge sort, or how to sort vectors in C++? Examples are given in the picks below. Want to learn about unit tests in C++, or about performance and optimization of C++ code?
Sometimes developers want to list or identify the USB device history on a machine programmatically, but don't know how. Don't worry: a component from MiTeC's System Information Component Suite enumerates the USB device history, and in this blog post we will learn how to use that component, TMiTeC_USBHistory.
Platforms: Windows.
Installation Steps:
You can easily install this component suite from the GetIt Package Manager. The steps are as follows.
In the RAD Studio IDE, navigate to Tools -> GetIt Package Manager -> select Components in Categories -> Components -> Trial -> MiTeC System Information Component Suite 14.3, and click the Install button.
Read the license and click Agree All. An information dialog appears saying 'Requires a restart of RAD Studio at the end of the process. Do you want to proceed?'; click Yes and continue.
It will download the plugin and install it. Once it is installed, click Restart Now.
How to run the Demo app:
Navigate to the Demos folder of the System Information Component Suite trial setup, which is installed during the GetIt installation, e.g. C:\Users\<username>\Documents\Embarcadero\Studio\21.0\CatalogRepository\MiTeC-14.3\Demos\Delphi25
Open the USBHistory project in RAD Studio 10.4.1, then compile and run the application.
This demo app shows how to list the USB device history on your machine, enumerate the records, and access their key properties.
Components used in the MSIC USBHistory demo app:
TMiTeC_USBHistory: enumerates all USB device history records and their properties.
TListView: shows the USB device history properties.
TButton controls: save the results and close the application.
Implementation Details:
An instance of TMiTeC_USBHistory named USBHistory is created. Call USBHistory.RefreshData, then add the USB history records to the TListView by looping through USBHistory.RecordCount. For each USB history record, get properties such as the device name, serial number, timestamp (last seen), and device class, and add them to the list view subitems.
procedure TForm3.FormCreate(Sender: TObject);
var
  i: Integer;
begin
  USBHistory := TMiTeC_USBHistory.Create(Self);
  USBHistory.RefreshData;
  for i := 0 to USBHistory.RecordCount - 1 do
    with lv.Items.Add do
    begin
      Caption := USBHistory.Records[i].Name;
      SubItems.Add(USBHistory.Records[i].SerialNumber);
      SubItems.Add(DateTimeToStr(USBHistory.Records[i].Timestamp));
      SubItems.Add(USBHistory.Records[i].DeviceClass);
    end;
  lv.AlphaSort;
end;
Display the selected USB Device History properties as shown below.
MiTeC_USBHistory Demo
It's that simple to enumerate the USB device history and list its properties in your application. Use this MiTeC component suite and get the job done quickly.
C++ has a wide ecosystem. One of our key goals with C++Builder is to ensure you can take advantage of the libraries other C++ developers write. With each release we’ve been working on the RTL and STL to ensure it is of a high quality and has great compatibility – exactly what you need if you’re upgrading projects, or you want to pull in C++ source code from a library online.
One great demonstration of this is the increasing number of (often complex) open source C++ libraries we’re making available in GetIt, our package manager.
10.4.2 has five new libraries available, making up fifteen open-source libraries in total – steadily increasing with each release. And these are amazing libraries.
Microsoft C++ Core Guidelines Support Library
Many open source C++ libraries are available in 10.4.2!
The C++ standards committee maintains its recommendation for how to use modern C++ effectively – the core guidelines. This library, written by Microsoft, contains a set of types and methods that help you write C++ code using those guidelines. It includes items like span, based on std::span but with bounds checking; not_null, forcing a smart pointer to never hold null values; precondition and postcondition assertions (expects and ensures); stack and heap arrays; and much more – these are just a few that I personally find useful.
You can read more about the useful methods and types in GSL readme… and now you can use these in C++Builder!
Google Test
We are often asked about test frameworks for C++Builder. We recommend DUnit, which supports C++, and Boost also includes Boost::test. However, Google Test is very well-known and includes the Google Mocks framework for object mocking.
This is a complex library, and not only is it useful for you to have in GetIt, including it is a clear demonstration of the quality of the 10.4.2 release.
xtl
Xtl contains many useful containers and algorithms used by the xtensor framework (one we're working on), which is often used in finance; in fact, it is part of the xtensor quant stack. If you're looking for high-performance C++, this is a great start.
ACE/TAO: cross-platform CORBA messaging library
ACE/TAO is one of the largest and most complex libraries — and also one that many customers ask us about. Over the past year, we’ve done a significant amount of work focusing specifically on this library. Its inclusion is a clear demonstration of the compatibility that 10.4.2 gives you. We also expect that many C++Builder users will want to use ACE/TAO, perhaps to upgrade projects from several versions ago. We’re very happy to have it on GetIt!
{fmt} Safe and very fast formatting for C++
C standard IO and C++ streams are famous for being difficult to use and often unsafe. {fmt} is a very popular alternative with elegant syntax, compile-time errors, strong testing, and excellent performance. Here are some code snippets taken from their readme:
std::vector<int> v = {1, 2, 3};
fmt::print("{}\n", v);
which prints:
{1, 2, 3}
Or an example of passing the wrong type, which may have bitten you in your code before:
std::string s = fmt::format(FMT_STRING("{:d}"), "I am not a number");
This gives a compile-time error. Check out the readme here: it has impressive code samples and benchmarks. This library is pending some minor final work, but is coming soon for you to use in C++Builder!
These key, useful open source libraries give immense value to your projects. I personally am especially excited to see ACE/TAO (often requested), the Guidelines Support Library, and {fmt}. Remember that you too can add any open source C++ library to your code with C++Builder: we’ve worked hard on compatibility and quality to make sure you can use whatever code you need.
Of course, that’s not all! GetIt also includes Boost (classic, Win32 clang, Win64 clang), EasyBMP, Eigen linear algebra and math framework, the Expat and TinyXML XML parsers, libsimdpp (fast math), NemaTode (NMEA and GPS), SMHasher (hash functions) and SDL2 (great for writing games!)
C++Builder has had steady work on compatibility and robustness to ensure your code works well when you upgrade, and that you can use external C++ code easily – a great benefit for your software. 10.4.2 shows this work well, with the addition of some really useful and technically complex libraries that demonstrate the improvements in this release. We hope the libraries will be beneficial to your projects! And even apart from these libraries, upgrade to 10.4.2 to make use of the improved quality for your software, as well as some of this release's other improvements – linker memory usage, code completion, and more.
Native applications feel really good. You know what I am saying! They look nice and work better, with no flickering and no eating up lots of memory. With Delphi and C++Builder you can build cross-platform native applications easily, in no time.
FireMonkey offers full access to platform-specific APIs, and it is easy to implement any feature while getting full native performance. However, each platform has hundreds of libraries and components, and you do not get all the platform-specific components in the palette.
Nevertheless, third-party components are available, and one of them is the TMS mCL. This is a set of components for true native macOS application development.
What do you get from the TMS mCL?
TTMSFMXNativeNSOutlineView
TTMSFMXNativeMaciCloudDocument
TTMSFMXNativeMacPDFLib
TTMSFMXNativeMaciCloud
TTMSFMXNativeNSRichTextView
TTMSFMXNativePDFThumbnailView
and more
For instance, the TTMSFMXNativeMaciCloud component provides:
Access to the iCloud key-value storage
Configurable automatic or manual synchronization of keys and values
Add, delete and update key events
Support for String, Integer, Boolean, Double, and TMemoryStream
Capability to synchronize settings and data between iPod, iPhone, iPad, and macOS applications
Or the TTMSFMXNativeNSRichTextView component:
Native macOS NSTextView with full rich text editing capabilities
Support for full document style and font manipulation
Support for URL, emoticons, bitmaps
Exporting options
These complex macOS components help you build a successful project using Delphi or C++Builder.
Here are the slides, demos, and replay from the Hands-On with Delphi 10.4.2 webinar, providing a deeper and more detailed look at recent Delphi features with a focus on 10.4.2 Sydney.
C++ has a broad ecosystem. One of our key goals with C++Builder is to ensure that you can take advantage of the libraries other C++ developers write. With each release, we have worked on the RTL and STL to ensure they are high quality and highly compatible – exactly what you need if you are upgrading projects, or if you want to grab a C++ library's source code from online.
A great demonstration of this is the growing number of (often complex) open-source C++ libraries that we are making available in GetIt, our package manager.
10.4.2 has five new libraries available, making fifteen open-source libraries in total – steadily growing with each release. And these are amazing libraries.
Microsoft C++ Core Guidelines Support Library
Many open source C++ libraries are available in 10.4.2!
The C++ standards committee maintains its recommendations on how to use modern C++ effectively – the Core Guidelines. This library, written by Microsoft, contains a set of types and methods that help you write C++ code following those guidelines. It includes items such as span, based on std::span but with bounds checking; not_null, forcing a smart pointer never to hold null values; precondition and postcondition assertions (Expects and Ensures); stack and heap arrays; and much more – these are just a few I personally find useful.
You can read more about the useful methods and types in the GSL readme … and now you can use them in C++Builder!
Google Test
We are often asked about testing frameworks for C++Builder. We recommend DUnit, which supports C++, and Boost also includes Boost::Test. However, Google Test is very well known, and it includes the Google Mock framework for object mocking.
This is a complex library, and not only is it useful to have in GetIt, it is also a clear demonstration of the quality of the 10.4.2 release.
xtl
xtl contains many useful containers and algorithms used by the xtensor framework (one we are working on), which is often used in finance – in fact, it is part of the xtensor quant stack. If you are looking for high-performance C++, this is a great start.
ACE/TAO: cross-platform CORBA messaging library
ACE/TAO is one of the largest and most complex libraries – and also one that many customers ask us about. Over the past year we have done significant work focused specifically on this library. Its inclusion is a clear demonstration of the compatibility that 10.4.2 gives you. We also expect that many C++Builder users will want to use ACE/TAO, perhaps to upgrade projects from several versions back. We are very happy to have it in GetIt!
{fmt}: safe and very fast formatting for C++
Standard C and C++ IO are famously hard to use and often unsafe. {fmt} is a very popular alternative with elegant syntax, compile-time errors, rigorous testing, and excellent performance. Here are a couple of code snippets taken from its readme:
std::vector<int> v = {1, 2, 3};
fmt::print("{}\n", v);
which prints:
{1, 2, 3}
Or an example of passing the wrong type, something that may have bitten you in your code before:
std::string s = fmt::format(FMT_STRING("{:d}"), "I am not a number");
This gives a compile-time error. Check out the readme: it has impressive code samples and benchmarks. This library is awaiting some small final work, but you will soon be able to use it in C++Builder!
These key, useful open-source libraries bring immense value to your projects. Personally, I am especially excited to see ACE/TAO (frequently requested), the Guidelines Support Library, and {fmt}. Remember that you can also add any open-source C++ library to your code with C++Builder: we have worked hard on compatibility and quality to make sure you can use the code you need.
Of course, that's not all! GetIt also includes Boost (classic, Win32 clang, Win64 clang), EasyBMP, the Eigen linear algebra and mathematics framework, the Expat and TinyXML XML parsers, libsimdpp (fast math), NemaTode (NMEA and GPS), SMHasher (hash functions), and SDL2 (great for writing games!)
C++Builder has had constant work on compatibility and robustness to ensure that your code works well when you upgrade, and that you can easily use external C++ code – a great benefit for your software. 10.4.2 shows this work well, with the addition of some really useful and technically complex libraries that demonstrate the improvements in this release. We hope the libraries are beneficial to your projects! And even apart from these libraries, upgrade to 10.4.2 to take advantage of improved quality for your software, as well as some of the other improvements in this release – linker memory, code completion, and more.
Do you need to programmatically list the Task Scheduler tasks on your machine from your Delphi app, and quickly enumerate them? Not sure how? Don't worry: MiTeC's System Information Component Suite helps you enumerate scheduled tasks. In this blog post we will learn how to use TTSTasks from the MiTeC_TaskScheduler_TLB unit.
Platforms: Windows.
Installation Steps:
You can easily install this component suite from the GetIt Package Manager. The steps are as follows:
In the RAD Studio IDE, navigate to Tools > GetIt Package Manager, select Components under Categories, choose Trial - MiTeC System Information Component Suite 14.3, and click the Install button.
Read the license and click Agree All. An information dialog appears: 'Requires a restart of RAD Studio at the end of the process. Do you want to proceed?' Click Yes and continue.
GetIt downloads and installs the package. Once it is installed, click Restart Now.
How to run the Demo app:
Navigate to the Demos folder installed with the System Information Component Suite trial during the GetIt installation, e.g. C:\Users\<user>\Documents\Embarcadero\Studio\21.0\CatalogRepository\MiTeC-14.3\Demos\Delphi\23
Open the TS project in RAD Studio 10.4.1, compile, and run the application.
This demo app shows how to list the scheduled tasks created on your machine, enumerate them, and access their properties.
Components used in the MSIC TS Demo App:
TListView to show the scheduled tasks' properties.
Implementation Details:
A variable tslist of type TTSTasks is declared and filled with the list of scheduled tasks using GetTaskList. Loop through tslist and add each task to the list view, listing properties such as Name, Path, Description, Author, Image, and Version for each task's list view item.
procedure TwndMain.FormCreate(Sender: TObject);
var
  i: Integer;
  tslist: TTSTasks;
begin
  // Fill tslist with every task registered in the Task Scheduler
  GetTaskList(tslist);
  for i := 0 to High(tslist) do
    with List.Items.Add do
    begin
      Caption := tslist[i].Path;
      SubItems.Add(tslist[i].ImagePath + ' ' + tslist[i].Args);
      SubItems.Add(tslist[i].Author);
    end;
end;
Display the Task Scheduler properties as shown below.
MiTeC Task Schedulers Demo
It's that simple to enumerate scheduled tasks and list their properties in your application. Use this MiTeC component suite and get the job done quickly.
Another interesting article has appeared in Dev-Insider, comparing well-known cross-platform development environments and their respective frameworks.
It shows how modern cross-platform approaches help developers get a grip on the differences between Android and iOS.
RAD Studio, with its graphical designer and user interface, is presented as an example of such a development environment.
Here are the slides, demos, and replay from the webinar Hands-On with Delphi 10.4.2 providing a more in-depth and detailed look at the recent features of Delphi with a focus on 10.4.2 Sydney.
How do Delphi, WPF .NET Framework, and Electron perform compared to each other, and what’s the best way to make an objective comparison? Embarcadero commissioned a whitepaper to investigate the differences between Delphi, WPF .NET Framework, and Electron for building Windows desktop applications. The benchmark application – a Windows 10 Calculator clone – was recreated in each framework by three Delphi Most Valuable Professionals (MVPs) volunteers, one expert freelance WPF developer, and one expert Electron freelance developer. In this blog post, we are going to explore the Deployment Requirements metric which is part of the performance comparison used in the whitepaper.
Deployment Requirements
What is the file size and number of files for the compiled project? Larger applications require more storage on user devices and longer download times, while a larger number of files can increase deployment complexity.
Each framework deployed its calculator differently. Delphi created one executable file that averaged 6.4 MB. WPF created an executable file and library file totaling less than 0.1 MB. The heavyweight of the group, Electron, produced 161 files totaling 198 MB due to its Chromium browser. Although lighter weight due to its use of the .NET Framework installed on recent Windows computers, WPF scored lower than Delphi due to producing two files. The second file may be optional (though it is generated at compile time) but the dependency on the .NET Framework is not. WPF .NET Framework applications do require that the correct .NET Framework also be installed on the machine (and is generally provided by default by Microsoft depending on your version of Windows). This can get even more complicated when working with older versions of Windows. In general, one file will be easier to manage than multiple files, as it can negate the need for an installer or scripts to update the application, and reduces network bandwidth requirements and hard drive use.
Additional and larger files can significantly increase loading times across a network. Electron is a case in point: its slowest network startup time was 19.66 seconds – twenty-three times slower than its slowest local time – indicating that Electron apps are best deployed locally for a consistent user experience, and might pose a significant problem for enterprises with large networked services or remote employees.
There are application packaging tools, such as UPX, which can compress Windows binaries to reduce loading time over a network. There may be additional third-party tools to combine and package Delphi, WPF .NET Framework, and Electron apps into a single executable, but all of those options require additional steps.
Let’s take a look at each framework.
What are Delphi’s deployment requirements?
Delphi compiled to one executable binary file averaging 2-8 MB in size (2 MB for VCL versions, and 8 MB for FMX versions). By default, Delphi creates a stand-alone executable with no dependencies. It does have the option to create an executable that shares packages with other executables.
What are WPF .NET Framework’s deployment requirements?
WPF compiled to 2 files that were just 55 KB in size. It is possible to create a single WPF .NET Framework executable that has no dependencies beyond the .NET Framework itself, and in the course of the whitepaper this was done. An optional config file is the second of the 2 files. However, the project compiled with assembly DLLs is depicted below. The .NET Framework installer is also included in the file listing. Different versions of Windows provide different versions of the .NET Framework pre-installed; find out which versions of the .NET Framework come pre-installed with each version of Windows.
What are Electron’s deployment requirements?
Electron compiled to 151 files that measured 198 MB in size. Here is a list of all of the files that were deployed with the Electron application.
In conclusion, Delphi provides a single zero-dependency executable by default, with no configuration changes. Additional steps could be taken to compress the Delphi executable for even faster network load times. WPF .NET Framework does make it possible to create a single executable using the correct deployment configuration, but it still requires that the correct version of the .NET Framework exist on the machine where the binary is executed. Electron may have configuration options to reduce the number of files needed to deploy an application, but overall it is probably going to require a significant number of files, and it can suffer significantly degraded load times when executed over a network.
Explore all the metrics in the “Discovering The Best Developer Framework Through Benchmarking” whitepaper:
Most developers are already familiar with HTTP web services. HTTP is a synchronous protocol: the client waits for the server to respond, which comes at the cost of poor scalability, and synchronous communication is problematic in high-load systems. Moreover, HTTP is one-way, so clients cannot passively receive commands from the network.
For these reasons, most high-performance scalable systems use an asynchronous messaging bus, rather than web services, for internal data interchange. We have already discussed something similar before: the Advanced Message Queuing Protocol (AMQP). AMQP is built for reliability and interoperability in the enterprise world, but it is not suitable for resource-constrained IoT applications. For those, we can use MQTT (Message Queuing Telemetry Transport), which is lightweight and flexible. The protocol is built around the publish/subscribe model.
So, if you are working on demanding IoT projects and need sensors to communicate with other systems, MQTT is a well-suited solution.
How can we utilize MQTT?
With the TMS MQTT library, this is easy. TMS MQTT is a cross-platform messaging client library implementing the full MQTT specification. It supports Delphi and C++Builder, and you can deploy your solutions to Windows, Android, iOS, macOS, and Linux.
What features does TMS MQTT have?
MQTT client component
Can be used in VCL, FMX, and LCL
Supports Windows, iOS, Android, macOS, Linux, and Raspberry Pi
Secure and non-secure connections
and more
How can I download and install it in RAD Studio?
You can easily install the library from the GetIt Package Manager in the IDE.
In Windows, users can easily inspect cached network credentials with the Credential Manager. Sometimes, however, developers need to list the cached Windows network credentials programmatically from a Delphi app. Not sure how? Don't worry: MiTeC's System Information Component Suite helps you enumerate the network credentials. In this blog post we will learn how to use the MiTeC_NetCreds component.
Platforms: Windows.
Installation Steps:
You can easily install this component suite from the GetIt Package Manager. The steps are as follows:
In the RAD Studio IDE, navigate to Tools > GetIt Package Manager, select Components under Categories, choose Trial - MiTeC System Information Component Suite 14.3, and click the Install button.
Read the license and click Agree All. An information dialog appears: 'Requires a restart of RAD Studio at the end of the process. Do you want to proceed?' Click Yes and continue.
GetIt downloads and installs the package. Once it is installed, click Restart Now.
How to run the Demo app:
Navigate to the Demos folder installed with the System Information Component Suite trial during the GetIt installation, e.g. C:\Users\<user>\Documents\Embarcadero\Studio\21.0\CatalogRepository\MiTeC-14.3\Demos\Delphi\24
Open the NetCreds project in RAD Studio 10.4.1, compile, and run the application.
This demo app shows how to list the cached network credentials, enumerate them, and access their properties.
Components used in the MSIC NetCreds Demo App:
TListView to show the cached network credentials and their properties.
TButton to save the listed network credentials to a .sif file and to close the application.
Implementation Details:
An instance NC of TMiTeC_NetCreds is created. Loop through NC's record count and add each network credential to the list view, listing properties such as credential type, target, timestamp, username, and password for each TCredRecord item. The credential type can be Generic, Domain Password, Domain Certificate, etc.
procedure TForm1.cmRefresh(Sender: TObject);
var
  i: Integer;
begin
  NC := TMiTeC_NetCreds.Create(Self);
  NC.RefreshData;  // read the cached credentials from Windows
  List.Items.Clear;
  for i := 0 to NC.RecordCount - 1 do
    with List.Items.Add do
    begin
      // Map the numeric credential type to a readable caption
      case NC.Records[i].Typ of
        1: Caption := 'Generic';
        2: Caption := 'Domain Password';
        3: Caption := 'Domain Certificate';
        4: Caption := 'Domain Visible Password';
        5: Caption := 'Generic Certificate';
        6: Caption := 'Domain Extended';
        7: Caption := 'Maximum';
        1007: Caption := 'Maximum Extended';
      else
        Caption := IntToStr(NC.Records[i].Typ);
      end;
      SubItems.Add(DateTimeToStr(NC.Records[i].Timestamp));
      SubItems.Add(NC.Records[i].Target);
      SubItems.Add(NC.Records[i].Username);
      SubItems.Add(NC.Records[i].Password);
    end;
  Caption := Format('Network Credentials - %d items', [List.Items.Count]);
end;
Display the Network Credentials properties as shown below.
It's that simple to enumerate cached network credentials and list their properties in your application. Use this MiTeC component suite and get the job done quickly.
I've collected a few additional blog posts with information about new features in the latest version of Delphi, C++Builder and RAD Studio, which is receiving very positive feedback from external customers.
In this blog post I want to touch on some of the features and the feedback we received, mostly referring to internal and external blog posts.
Microsoft WebView2 Support (aka Edge Chromium)
We now support the release version, which works against the official control and doesn't require you to install Edge (Canary or regular) but only the "control" version. Jim has a very nice blog post on the improvements to the TEdgeBrowser component, including the steps on how to configure your application to use the released version of WebView2 in 10.4.2:
Improvements in Delphi compiler performance are very significant in 10.4.2 (although the real effects depend heavily on each application's size and structure). A few blog posts:
Launch Webinar Demos and Other Sessions
Jim has collected some demos used to showcase 10.4.2, including the new TControlList VCL component, in a GitHub repository at
By the way, the launch webinar replay is available at https://youtu.be/AZ4ba9Tf3qE (notice that with all Q&A recordings for 3 sessions the total length is over 4 hours, but the main presentation is less than one hour long).
VCL Control List and Remote Desktop Support
A trial version of 10.4.2 is already available, and new product purchases will include the 10.4.2 download. If you already own the product and have an active Update Subscription, you can use RAD Studio 10.4.2 with your existing license. 10.4.2 can be downloaded from the new customer portal (my.embarcadero.com).
How do Delphi, WPF .NET Framework, and Electron perform compared to each other, and what’s the best way to make an objective comparison? Embarcadero commissioned a whitepaper to investigate the differences between Delphi, WPF .NET Framework, and Electron for building Windows desktop applications. The benchmark application – a Windows 10 Calculator clone – was recreated in each framework by three Delphi Most Valuable Professionals (MVPs) volunteers, one expert freelance WPF developer, and one expert Electron freelance developer. In this blog post, we are going to explore the Target Platforms metric which is part of the flexibility comparison used in the whitepaper. The calculator project focuses on Windows but both Delphi and Electron support more than just Windows. This article will explore the additional platform support of the frameworks.
Target Platforms
How many user platforms can the framework deploy an application to? Great frameworks will support most platforms on the market, whether mobile, desktop, 32-bit, or 64-bit. Businesses benefit from multi-platform support because they can develop and maintain one codebase to reach many customers. One codebase rather than separate code for each target application reduces development time, bug potential, maintenance requirements, and time-to-market for new features.
Delphi’s major advantage over WPF and Electron is that its FMX framework can deploy one body of source code as a binary to any major desktop or mobile platform, maximizing a business’s reach to customers and minimizing code duplication and maintenance/upgrade headaches. It can support projects of every size from logic controllers for industrial automation to world-wide inventory management, and be developed for every tier from a database-heavy back end to the GUI client-side of an application.
WPF with .NET Framework targets Windows computers directly, but the source code is only usable on other platforms with additional work. Additionally, the .NET Framework is now a legacy framework according to Microsoft. The framework is primarily geared toward client-side desktop applications, but it can incorporate business logic in C# for middle-tier or back-end functions and access the ADO.NET Entity Framework for databases.
Electron is an open-source framework managed by GitHub (owned by Microsoft) targeting desktop operating systems such as Windows, macOS, and Linux through its Chromium browser base. Coding can be done using JavaScript inside the browser webview or at the Node.js level. It focuses on client-side, typically web-centric applications, but uses Node.js for middle-tier and back-end services. Delphi itself can also be used to build Electron applications using third-party solutions like TMS Web Core.
Delphi can compile to native 32-bit or 64-bit code for Windows using the VCL framework, and to 32-bit or 64-bit code for Windows, macOS, Android, iOS, and Linux using the FMX framework. Delphi is the ultimate rapid application development environment for quickly developing high-performance native cross-platform applications in modern Object Pascal. Utilize powerful, award-winning visual design tools and an integrated toolchain for rapidly designing and developing visually stunning apps that reach billions of users on Windows, macOS, iOS, Android, and Linux devices from a single codebase with a responsive UI. Leverage powerful database access components, cloud libraries, and data binding technologies to confidently deliver projects on time and under budget. Independent developers and enterprise development teams love Delphi because it delivers 5x the development productivity across desktop and mobile platforms. A third-party library called ScriptGate also allows a smaller and tighter WebView integration, similar to Electron, from within Delphi.
Platforms and the applications they support:
32-bit Windows: 32-bit FireMonkey applications
64-bit Windows: 64-bit FireMonkey applications
macOS, either 32-bit (Delphi and C++) or 64-bit kernel (Delphi only): 32-bit FireMonkey applications
iOS Device: 32-bit (C++ and Delphi) or simulator (Delphi)
WPF .NET Framework can compile to managed code for Windows. As was found in the IP Security section of the whitepaper, managed code like that used within WPF .NET Framework apps by default is easily decompiled and readable by end users. In 2021, Windows is only one of the major platforms out there; building an app in WPF .NET Framework leaves a developer building for all the other platforms in a different tool or codebase. With the numbers above, Windows is only 36.27% of the device market, so by using WPF .NET Framework a developer cannot target the other 63.73% of the devices on the market. According to Microsoft:
“To put it very simply, managed code is just that: code whose execution is managed by a runtime. In this case, the runtime in question is called the Common Language Runtime or CLR, regardless of the implementation (for example, Mono, .NET Framework, or .NET Core/.NET 5+). CLR is in charge of taking the managed code, compiling it into machine code and then executing it.”
Electron
Electron officially packages for cross-platform use within the Chromium browser rather than compiling to native code. The back end of an Electron app is Node.js, while the front end is JavaScript within Chromium. A developer must also choose a front-end framework (or no framework) such as Angular, Vue.js, or React. Based on the above platform numbers, Electron can only target 45.35% of the devices out there, leaving developers to target the other 54.65% with other solutions. There are third-party solutions to run Electron on Android and iOS, but they can have an unstable life: Apple banned iOS apps built in Electron in an incident in 2019. Electron itself can be targeted as a platform from Delphi using the third-party TMS Web Core framework.
macOS – Only 64-bit binaries are provided for macOS, and the minimum supported version is macOS 10.10 (Yosemite). Native support for Apple Silicon (arm64) devices was added in Electron 11.0.0.
Windows – Windows 7 and later are supported; older operating systems are not supported (and do not work). Both ia32 (x86) and x64 (amd64) binaries are provided for Windows. Native support for Windows on Arm (arm64) devices was added in Electron 6.0.8. Running apps packaged with previous versions is possible using the ia32 binary.
Linux – The prebuilt binaries of Electron are built on Ubuntu 18.04. Whether a prebuilt binary can run on a given distribution depends on whether that distribution includes the libraries Electron is linked against on the build platform, so only Ubuntu 18.04 is guaranteed to work. However, the following platforms are also verified to run the prebuilt binaries of Electron: Ubuntu 14.04 and newer, Fedora 24 and newer, and Debian 8 and newer.
Let’s recap. Delphi supports Android, iOS, macOS, Windows, and Linux with native code from a single codebase and single UI. WPF .NET Framework is managed code on Windows only, though some of the code may be portable. Electron officially supports macOS, Linux, and Windows through JavaScript in a webview and Node.js. Delphi can also be used to target Electron via third-party libraries. Overall, Delphi provides broader and tighter deployment options than the other two frameworks.
Explore all the metrics in the “Discovering The Best Developer Framework Through Benchmarking” whitepaper:
FlexCel is one of the most powerful and extensive component suites for native Excel report & file generation with VCL and FireMonkey. It supports more than 300 Excel functions for calculation. Best of all, it is a cross-platform component suite with 100% native support for reading and creating Excel files.
TMS FlexCel goes further than that. For instance, you can export .XLS & .XLSX files to SVG and create complex reports using Excel that support images, comments, pivot tables, charts, conditional formats, and almost anything you need.
Many developers like the abstractions TMS FlexCel provides, which let you build applications in no time without writing too much code. Furthermore, FlexCel has full documentation and more than 50 samples showing all the functionality of the TMS FlexCel components. The following example recalculates several linked files in a single workspace:
var
  work: TWorkspace;
begin
  // The workspace will own the TXlsFile objects,
  // so we won't need to free them.
  // If we set the parameter to False,
  // we would have to manually free the TXlsFile objects.
  work := TWorkspace.Create(True);
  try
    work.Add('xls1', TXlsFile.Create('File1.xlsx'));
    work.Add('xls2', TXlsFile.Create('File2.xlsx'));
    work.Add('xls3', TXlsFile.Create('File3.xlsx'));
    // Calling Recalc on the workspace (or on any file in it)
    // recalculates all the files in the workspace.
    work.Recalc(True);
  finally
    work.Free;
  end;
end;
FastReport VCL & FMX is an add-on component that allows your application to generate reports swiftly and efficiently. FastReport provides all the essential tools to develop reports, including a visual report designer, a reporting core, and a preview window. It can be used in the Delphi, C++Builder, and RAD Studio environments.
Moreover, FastReport FMX now works with Linux, so you can deploy your FastReport FMX-based solutions to Linux.
What are the features of FastReport?
Advanced report designer
Data Grouping and Master-Detail reports
Caching of the big reports
Exports to popular formats (PDF, RTF, HTML, BMP, JPEG, TIFF, GIF, TXT, CSV)
Report inheritance
UNICODE support
Report encryption
Nested reports by using subreport object
Dot Matrix reports
Linear barcodes
Composite reports
procedure TForm1.frxReport1BeforePrint(Sender: TfrxReportComponent);
begin
  if Sender.Name = 'Picture1' then
    TfrxPictureView(Sender).Picture.Assign(
      Chart1.TeeCreateMetafile(False,
        Rect(0, 0, Round(Sender.Width), Round(Sender.Height))));
end;
procedure Page1OnManualBuild(Sender: TfrxComponent);
var
  i, j: Integer;
  SaveY: Extended;
begin
  SaveY := Engine.CurY;
  for j := 1 to 2 do
  begin
    for i := 1 to 6 do
    begin
      Engine.ShowBand(MasterData1);
      Engine.ShowBand(MasterData2);
      if i = 3 then
        Engine.CurY := Engine.CurY + 10;
    end;
    Engine.CurY := SaveY;
    Engine.CurX := Engine.CurX + 200;
  end;
end;
Be sure to head over and check out the FastReport Report Generator on the GetIt portal and download it from within the IDE.
When did you start using RAD Studio/Delphi, and how long have you been using it?
I have been developing my software since the days of Turbo Pascal. I converted all of my software (I have multiple programs named PROLINES, WINGS, LOFT, FOIL) to Delphi when the first version came out for WIN 3.1. So I have been using Delphi since the beginning 26 years ago.
What was it like building software before you had RAD Studio/Delphi?
I initially began by creating my own pull-down windows software using a set of tools called Metagraphics and Turbo Pascal. I had to create my own pull-downs, detect clicks on window items, create dialog windows, etc. I was thrilled when I discovered Delphi and immediately began to learn OOP and Windows programming with Delphi. The Metagraphics package provided drivers for the best-known graphics cards of the time, but inevitably they failed to keep up with the constantly changing world of graphics cards. As a result some users of my software were forced to use it in very low resolution (old VGA and EGA) default drivers. Printing the screen was a mess, etc. With the advent of Delphi for Windows, I was able to massively improve my product offerings and have continued to update and improve them over the last 20 years. Most recently I have been using the latest versions of Delphi to provide more access to Windows 10 features and especially provide superb graphics resolution for my heavily graphics-oriented software with the new DPI features. I had all but given up being able to do really fast 3D rendering of the Hull, Keel and Rudder designs created by my users until FireMonkey. I have chosen to keep my Windows 10 programs (I have over 150,000 lines of code invested in the 4 programs PROLINES, WINGS, FOIL and LOFT) and will add 3D rendering (TMesh with NURB Surfaces) by making a separate FireMonkey program to be called by my existing VCL based programs when 3D rendering is desired. Previously I had used DLL access to OpenGL, but this DLL became outdated and was no longer supported by the original author. One of the key values of the most recent version of Delphi is providing enduring support for legacy programs that are now all over 25 years old.
How did RAD Studio/Delphi help you create your showcase application?
Delphi has been critical to me for over 25 years as I developed 4 major programs for Boat Design and Analysis Software (PROLINES, WINGS, LOFT and FOIL). Because of the success of Delphi as a RAD system I have been able to find complementary products for software copy protection and advanced 3D graphics before the advent of FireMonkey. Delphi gives me the ability to quickly generate beautiful dialog boxes for data entry and presentation of analysis results. The inclusion of TChart has been superb for my very technical engineering software that frequently generates data as a function of speed, angle, etc.
What made RAD Studio/Delphi stand out from other options?
The very fast learning curve, relative ease of learning the Delphi IDE and built in support for graphics (2D/ 3D Computer Aided Design). OOP language is excellent for my engineering software as it provides excellent means of variable typing and computational accuracy. Generating complex calculations / formulas in Delphi is critical to my success. Because I use NURB Surfaces (Non-Uniform Rational B-Splines) my calculations are recursive as the formulas are not closed form but use U / V parametric terms. Therefore generating planar cuts through a NURB surface requires special algorithms that use iterative solutions. Therefore computational speed of the compiled software is critical to have a useful experience for users. I could not choose to use any interpreted language for my applications as the computational load would result in dramatic usability issues with large lags and delays in screen updates etc. Computational speed of Delphi allows real time editing and updating of the entire NURB surface regardless of complexity of the shape.
What made you happiest about working with RAD Studio/Delphi?
Rapid product development, support of 3D CAD type drawings for rendering, high speed calculations by my software, support for High DPI monitors that are critical to fine design details in CAD applications. I am also very pleased with debugging facilities that have helped resolve complex issues in programs that run nearly 50,000 lines of code.
What have you been able to achieve through using RAD Studio/Delphi to create your showcase application?
I created a long-lived business in CAD tools for boat and yacht design that has now been in use for over 30 years (before Delphi, I used Turbo Pascal). That business allowed me to save for two college educations, cash savings, two weddings for our children and stability for us over the last 3 decades. Delphi was a huge leap in the ease of creating Windows-based software for me. I had no hope of moving my TP products to Windows and was very concerned that I would lose my business, until Delphi appeared and saved my future! At one point I was associated with 3 different America’s Cup challenge teams and wrote many articles in national (USA) magazines on Boat and Yacht Design for speed and efficiency. I did all this while working full time as an Electrical Engineer developing state-of-the-art low power RADAR systems first for Boeing Aerospace and later for Honeywell Aerospace in Advanced Technology. I have 48 patents to my name in the US and several more overseas in the EU. Having a fast, fun, easy-to-debug development system has been a godsend to me as a developer.
What are some future plans for your showcase application?
I plan to continue to add new computational features to all of my programs, starting with PROLINES 8, but extending to WINGS, LOFT and FOIL. I also plan to re-create 2 other programs that have not worked on new PC’s since WIN 3. I will use all of the RAD features of Delphi and look forward to adding to my software offerings. I also plan to add excellent rendering features to all of my software by creating a master Firemonkey based rendering program that can be called by all of my existing software and that will eventually allow my clients to create fully rendered versions of their designs that will include keels (sailboat), rudders, deck structures and more.
Thanks David! You can check out his software’s showcase entry below.
The Delphi ecosystem has dozens of component partners that help developers build amazing and complex applications faster. One of them is /n software, which provides a wide range of components for developers.
The E-Payment library simplifies e-commerce development and offers 100+ payment gateways. It comes with several complete demo applications; for instance, the ExpressCheckout demo shows you how to obtain payment quickly through PayPal using ExpressCheckout.
Moreover, thousands of developers across the world love its best features, such as:
Credit Card processing and eCheck support
AVS support
256-bit SSL encryption and Digital Certificates
Check21 electronic check image processing support
and more
This is an example (in C#) of fetching a token and redirecting to PayPal’s site:
expresscheckout1.OrderTotal = "88.88";
expresscheckout1.ReturnURL = "http://localhost/example/return/url";
expresscheckout1.CancelURL = "http://localhost/example/cancel/url";
expresscheckout1.PaymentAction = ExpresscheckoutPaymentActions.aSale;
expresscheckout1.SetCheckout();
// Now check for success and redirect the buyer:
if (expresscheckout1.Ack == "Success")
{
  // Redirect is not a component method and should be implemented externally
  Redirect("https://www.sandbox.paypal.com/cgi-bin/webscr?cmd=_express-checkout&token=" + expresscheckout1.ResponseToken);
}
Head over and check out the E-Payment Library on the GetIt portal and download it from within the IDE.
Embarcadero Technologies has a set of industry-ready solutions in the GetIt portal. Most of these complete applications come with documentation and reusable components and modules.
These template applications are general in form and apply the latest Delphi and C++Builder development patterns and best use cases.
Now I would like to introduce the Restaurant Ordering Template, which gives you the general structure of an online ordering application.
The Restaurant Ordering template is based on cross-platform development patterns and offers easy developer customization and deployment.
Even if you are not going to create an application like this, you should check it out, because it demonstrates several FireMonkey development best use cases.
RAD Studio 10.4.2 has been released. In addition to new features, this release brings significant quality improvements, building on the quality-focused 10.4 Sydney and 10.4.1 releases. RAD Studio 10.4.2 includes many enhancements centered on developer productivity. This post focuses on the IDE improvements in 10.4.2.
Beyond the features introduced here, RAD Studio 10.4.2 addresses more than 600 customer-reported issues, including IDE quality improvements. This release also improves IDE responsiveness, for example when handling large projects, making it easier to use than previous versions. Please update to 10.4.2 and see for yourself.
Get started with 10.4.2 today
A trial version of 10.4.2 is already available, and new purchases will include access to the 10.4.2 download. Existing customers with an active Update Subscription can use RAD Studio 10.4.2 with their current license. 10.4.2 can be downloaded from the new customer portal (my.embarcadero.com).
With the release of 10.4.2 Sydney, the TEdgeBrowser for Delphi, C++Builder, and RAD Studio now works with the released version of Microsoft Edge and the Microsoft Edge WebView2 Runtime. This VCL component offers a number of improvements over the existing TWebBrowser, and the one I want to show you now is how to render arbitrary HTML, run arbitrary JavaScript, and grab the HTML Source from a web page.
First of all, to use TEdgeBrowser you need the Microsoft Edge WebView2 Runtime, which Microsoft distributes in three forms:
Evergreen Bootstrapper – Downloads and installs the latest version of the runtime matching the current architecture. Use this to install with your application.
Evergreen Standalone Installer – Offline installer for Win32 or Win64
Fixed Version – Allows you to download and install previous versions instead of the latest one offered by the Evergreen installers.
If you are building a Win32 application, then install the x86 version. If you are building a Win64 application, then install the x64 version. Or install both!
At runtime your program needs access to WebView2Loader.dll. The easiest way to do this is either to place it on the path or to use a Post-Build event:
For Win32: copy /y "$(BDS)\Redist\win32\WebView2Loader.dll" "$(OUTPUTDIR)"
For Win64: copy /y "$(BDS)\Redist\win64\WebView2Loader.dll" "$(OUTPUTDIR)"
When deploying your application you need to include the correct WebView2Loader.dll and make sure the Microsoft Edge WebView2 Runtime is installed. At some point it will be installed by default, but for now the easiest way is to use the Evergreen Bootstrapper.
Displaying Arbitrary HTML
To load arbitrary HTML into the TEdgeBrowser, call the NavigateToString method, passing the HTML as a string.
EdgeBrowser1.NavigateToString('<html><body><h1>Hello From Delphi</h1></body></html>');
And your HTML will immediately render in the browser. No need to save it to a file or other such silliness.
Executing Arbitrary JavaScript
You can execute JavaScript in the TEdgeBrowser with the ExecuteScript method. The script is executed in the current page, just as if you had opened the JavaScript console in the browser, so you are able to interact with the DOM as well.
View The Source
The fundamentals of viewing the source of a page start with the ExecuteScript method, but are a bit more involved. There is an event handler for when your script finishes executing, since it executes asynchronously. You can use that event to return a JSON object; in this case, the JSON object is the HTML source.
The script uses the DOM (Document Object Model) to grab the document element (the root element, or <html>) and request its outerHTML, which is all the HTML including the document element itself, and then URL-encodes the result. Now for the OnExecuteScript event handler:
uses
  System.NetEncoding;

procedure TEdgeViewForm.EdgeBrowser1ExecuteScript(Sender: TCustomEdgeBrowser;
  AResult: HRESULT; const AResultObjectAsJson: string);
begin
  if AResultObjectAsJson <> 'null' then
    memoHTML.Text := TNetEncoding.URL.Decode(AResultObjectAsJson).DeQuotedString('"');
end;
This event fires after every call to ExecuteScript, but AResultObjectAsJson will be 'null' if no result is returned, so we simply ignore the 'null' values. Otherwise we use TNetEncoding.URL.Decode to remove the encoding, and remove the quotes with DeQuotedString('"'). You can take the result of this and send it right back in with a call to NavigateToString().
If you need a high-quality diagram editing environment, TMS Diagram Studio is for you. With TMS Diagram Studio, you can add diagram and flowchart capabilities to your application.
TMS Diagram Studio's editing behavior is similar to standard diagramming applications, and it provides high-quality drawing of blocks and lines. Since there are ready-to-use flowcharts, different arrow types, and electric blocks, you can easily create a diagram in no time.
What are the features of TMS Diagram Studio?
Diagram snap grid, background image, rulers, saving and loading
Support for different layers
Live diagram execution, live flowcharts, and clipboard operations
Full customization of color, width, height, pen, brush, and text
C++ is one of the top programming languages in the software development world. With its tons of libraries, developers spend less time building scientific and complex applications. One of the best-known libraries in the C++ community is Boost. The Boost community researches and creates new features all the time, which is why the library is updated every 3-5 months.
The Boost library offers a wide range of simplifications for C++ devs. It is portable, open-source, free, and actively maintained, and it promotes efficient and readable C++ code. Since several Standards Committee members are also among the most active members of the Boost community, you can find many correct and useful techniques for tackling advanced problems.
Here are some of the tools of the Boost library:
Smart Ptr, Pool
Utility, Log, UUID
Thread, Date-Time, Filesystem, Asio
Algorithms
Wide range of special Data Structures
and more
I have seen that competitive programmers love the ideas built into the Boost library and apply its features when solving problems.
Here is a basic example using various localization functions provided by this library:
#include <boost/locale.hpp>
#include <iostream>
#include <ctime>

int main()
{
    using namespace boost::locale;
    using namespace std;

    generator gen;
    // Create the system default locale
    locale loc = gen("");
    // Make it the global locale
    locale::global(loc);
    // Set it as the default locale for output
    cout.imbue(loc);

    cout << format("Today {1,date} at {1,time} we had run our first localization example") % time(0)
         << endl;
    cout << "This is how we show numbers in this locale " << as::number << 103.34 << endl;
    cout << "This is how we show currency in this locale " << as::currency << 103.34 << endl;
    cout << "This is typical date in the locale " << as::date << std::time(0) << endl;
    cout << "This is typical time in the locale " << as::time << std::time(0) << endl;
    cout << "This is upper case " << to_upper("Hello World!") << endl;
    cout << "This is lower case " << to_lower("Hello World!") << endl;
    cout << "This is title case " << to_title("Hello World!") << endl;
    cout << "This is fold case " << fold_case("Hello World!") << endl;
}
C++Builder developers can easily use the Boost library from within the IDE.
TCoffeeAndCode returns in March for three more sessions, moving to a new time of 1 p.m. GMT (2 p.m. CET) to allow for a lunch break, and to let some of our American friends join us over breakfast!
Tue – March 16 – 1pm GMT / 2pm CET / 8am CDT
Catch up on RAD Studio 10.4.2
A chat with Marco Cantu and David Millington about 10.4.2, which was released since our last TCoffeeAndCode
Tue – March 23 – 1pm GMT / 2pm CET / 8am CDT
Modern UI Design
A discussion with Ian Barker (MVP) and Dr. Holger Flick (TMS) about modern UI designs and how to keep your UI looking modern.
Tue – March 30 – 1pm GMT / 2pm CET / 7am CDT
IoT and Data
Diving into some of the technologies that help with moving data in the IoT space
We aim to shorten the sessions slightly to about 30 minutes of discussion, but if the chat is flowing, feel free to stay a little longer.
We will start with a recap of what has happened since our last TCoffeeAndCode session, and a special guest speaker from the PM team will be joining as well!
Looking forward to seeing you online as we talk about all things #RAD
There was a time when 1024×768 was the gold standard of screen resolutions. Smaller monitors ran at 800×600 points, and the "luxury models" at 1280×1024.
Those days are long gone. Today there is a wide spectrum of display sizes, physical resolutions, and pixel densities. The actual display size (usually measured in inches) does not necessarily correlate with the physical resolution (measured in pixels X×Y). Typical Full HD televisions at 55 or even 65 inches commonly have resolutions of 1920×1080. On the other hand, the current iPhone 12 Pro Max has a physical resolution of 2778×1284 pixels on 6.7 inches: roughly a tenth of the display diagonal at a higher resolution.
Of course, this also has to do with viewing distance: a television is typically a few meters from my eyes, a smartphone more in the range of tens of centimeters. At close range, the human eye can "resolve" more detail. Apple calls this a "Retina Display" or even a "Super Retina Display".
A key figure describing these screens is DPI: dots per inch. In the past (the 1024×768 days) this was usually 96 DPI: one square inch (2.54×2.54 cm) contained 96×96 pixels. Back then the world was still in order, or at least simple for the Windows desktop developer. Today, smartphones, tablets, and even Windows desktops have screens with 300 or more DPI: notebooks with a 13-inch display and 3840×2160 pixels (331 DPI).
Note: I make no distinction here between DPI and PPI, which does make a difference for print output. DPI and PPI are used synonymously here.
Also, only Windows 10 is considered here (in recent editions, from the 1703 Creators Update onward), since these can scale applications. Per Monitor (v1) has been supported since Windows 8.1.
Icons by Icons8
Where does the problem lie?
Users can employ monitors in many ways, with different configurations and setups:
People who prefer larger fonts over more content (higher scaling)
People who use multiple monitors (e.g., a 24″ device with a native 1920×1080 and a 30″ display with 3840×2160) with different scaling factors in Windows (100% vs. 200%)
Switching from a notebook's internal display to an external display (docking) with a higher resolution
Remote sessions via Remote Desktop, TeamViewer, AnyDesk, VNC, etc., with different scalings
The dynamics of temporarily switching DPI scaling: does the application react to it?
Windows applications based on the classic Win32/Win64 API (such as Delphi/C++Builder apps) can contain information about whether they can adapt to scaling. The Windows Task Manager can even display this:
So what does that mean?
Here is a table. The images are "clickable" to enlarge.
(A Windows desktop application built with Delphi: a toolbar with icons (16×16) and an image (cat content!) at 150×150 pixels without the Stretched property. DPI support is a project option in Delphi/C++Builder under Project | Options -> Application -> Manifest -> DPI Awareness; "Per Monitor v2".)
[Table of screenshots: rows show 100% and 200% Windows scaling; columns show the DPI awareness settings "None", "Per Monitor v2", and "GDI scaling".]
At 100% (Windows) scaling, everything looks good, as expected. High-DPI (200%) configurations then become problematic:
With "None" DPI awareness, the text and the icons look bad
With "Per Monitor v2" the text looks good, but the icons and the image are not scaled up (because the underlying graphics remain at 16×16 and 150×150 pixels)
With "GDI scaling" the text looks good and the graphics (icons, image) are scaled up... at least.
The settings mean the following:
None / DPI Unaware:
These are apps that are always rendered assuming 100% scaling (96 DPI). No attempt is made by the app itself to compensate for scaling.
Unknown:
No explicit setting for DPI rendering.
System Aware / System DPI Aware:
These are apps that know the DPI of the primary display at the time the user logs on to the computer (called the "system DPI"). These apps scale well on the primary display but look blurry on a secondary display with a different DPI.
Per Monitor (v2):
These are apps that render content at different DPIs and can change their DPI scaling on the fly when moved between monitors with different DPIs. Done well, these apps look good regardless of the monitor's DPI.
– The application is notified when the DPI changes (both the top-level and child HWNDs)
– The application sees the raw pixels of each display
– The application is never bitmap-scaled by Windows
– Automatic DPI scaling of non-client areas (window caption, scroll bars, etc.) by Windows
– Win32 dialogs (from CreateDialog) are automatically DPI-scaled by Windows
– Bitmaps from Windows controls (checkboxes, radio buttons, etc.) are automatically rendered with the appropriate DPI scaling factor. Important note: Microsoft did not adapt "Per Monitor V2" for MDI applications and their child forms; there, everything looks different again. Quite literally.
GDI scaling:
Scaling via GDI. It greatly improves the readability of text but can cause "clipping" and "kerning" problems: text is not rendered correctly and can have different run lengths (kerning) or no longer fit into the space provided (clipping).
The problem with the graphics is that there are no high-resolution images and/or icons available as resources that Windows could scale. This is not limited to "real graphics"; it affects GUI elements too:
(Screenshots: 100% / 200% with Per Monitor V2 / 200% with GDI Scaling)
Note: the settings made in the project options can also be overridden from within Windows, in the properties of an application's executable. Interestingly, this Windows dialog itself does not scale; at 200% its text is blurry:
Here, "System (Enhanced)" corresponds to GDI Scaling.
How do Delphi / C++Builder / RAD Studio help?
In general, it is recommended to use "Per Monitor v2" scaling. Graphics and icons can be stored with more image information and scaled correctly via the ImageCollection/VirtualImageList and VirtualImage components. See the video by Olaf Monien on this (linked below).
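As a sketch of that approach (the component and control names here are illustrative, not taken from the post): multi-resolution icons are added to a TImageCollection, and a TVirtualImageList renders them at whatever size matches the control's current DPI.

```
// Hedged sketch, assuming a form with ImageCollection1, VirtualImageList1
// and ToolBar1 placed at design time (names are examples).
procedure TFormMain.FormCreate(Sender: TObject);
begin
  // At design time, add each icon in several sizes (e.g. 16, 24, 32 px)
  // to ImageCollection1; TVirtualImageList then picks/scales the variant
  // that best matches the monitor the control is shown on.
  VirtualImageList1.ImageCollection := ImageCollection1;
  VirtualImageList1.SetSize(16, 16);     // logical size at 96 DPI (100%)
  ToolBar1.Images := VirtualImageList1;  // the toolbar icons now scale cleanly
end;
```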
At the top level, the following changes were made in the unit VCL.Classes (introduced in 10.3 and improved in 10.4) to support this feature:
The global function GetSystemMetricsForWindow wraps a call to the new GetSystemMetricsForDPI function if it is available, and to the traditional GetSystemMetrics function otherwise. It takes a handle parameter that is passed on to the API. We recommend using this new function instead of the traditional WinApi.GetSystemMetrics function if you want to support Per Monitor V2.
The method TControl.GetSystemMetrics(nIndex: Integer) returns the system metrics value for the control by calling the new global function GetSystemMetricsForWindow.
TControl.GetCurrentPPI returns the DPI for the control depending on the current monitor, and TControl.CurrentPPI is a read-only property mapped to this function.
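A minimal sketch of how CurrentPPI can be used for manual scaling, e.g. for owner-drawn content that Windows will not scale for you (the helper function itself is hypothetical, not part of the VCL):

```
uses Vcl.Controls, Winapi.Windows;

// Scale a pixel value designed for 96 DPI (100%) to the DPI of the monitor
// the control currently sits on. Under Per Monitor v2, the result follows
// the control when it is dragged between monitors.
function ScaledSize(AControl: TControl; ASizeAt96Dpi: Integer): Integer;
begin
  Result := MulDiv(ASizeAt96Dpi, AControl.CurrentPPI, 96);
end;

// Example: a 16 px margin becomes 32 px on a 200% (192 DPI) monitor.
```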
In RAD Studio 10.4, the VCL Styles architecture was significantly extended to support High-DPI graphics and 4K monitors.
In earlier versions of RAD Studio, a single image was used for all graphical elements of a style, along with specific information about the sizes of the elements.
Now all graphical elements are automatically scaled to the correct resolution of the monitor the element is displayed on. This means the scaling depends on the DPI resolution of the target machine, or of the current monitor on multi-monitor systems.
With the Style Designer, additional images can now be included for specific objects and different resolutions.
The names of the new elements follow the convention "object name + DPI information", with an underscore between the name and the size information, in the following format:
[Name]_15x (for 150% DPI)
[Name]_20x (for 200% DPI)
RAD Studio 10.4 adds elements only for 150% and 200% DPI, for most of the VCL styles shipped with the product.
TCoffeeAndCode is returning for 3 more sessions in March, moving to a new time of 1pm GMT (2pm CET) to enable a lunchtime break, and also to enable some of our American friends to join us for a breakfast brew!
Tue – 16th March – 1pm GMT / 2pm CET / 8am CDT
Catch up around RAD Studio 10.4.2
A chat with Marco Cantu and David Millington about 10.4.2, which was released since our last TCoffeeAndCode
Tue – 23rd March – 1pm GMT / 2pm CET / 8am CDT
Modern UI Design
A discussion with Ian Barker (MVP) and Dr Holger Flick (TMS) about modern UI designs and how to keep your UI looking modern.
Tue – 30th March – 1pm GMT / 2pm CET / 7am CDT
IoT and Data
Diving into some of the technologies helping with the movement of data within the IoT space
We are aiming to shorten the sessions slightly to around 30 minutes of discussion, but if the chat is flowing, you are welcome to hang around a little longer.
We will be kicking off with a catch-up of what has happened since our last TCoffeeAndCode session, and have a special guest speaker from the PM team joining too!
Look forward to seeing you online as we talk all things #RAD
Falcon 9 – First Stage Simulator uses engineering equations to simulate the behavior, thrust, and performance of the first stage of a Falcon 9 (SpaceX) rocket, as well as the control of its trajectory. The software is built with Delphi. The developer explains:
"There are several panels for loading structural information, engine performance characteristics, throttle control curves, vehicle tilt, and so on. The main panel shows the progress of the simulation in real time, as a 3D model rendered in an OpenGL window. During the first phase of the simulation it synchronizes with actual launch video, so the speed and altitude at each moment can be compared to evaluate the accuracy of the simulation. Once the simulation is complete, all the information can be downloaded to an Excel file to create charts for evaluation. All of the application's parameters (structure, engine, and control information) are fully customizable, so you can configure and simulate any mission you need and compare the results with real video in real time. The application is still under development; the simulator will eventually cover everything from the launch phase up to second-stage separation. This application was created with Delphi Community Edition 10.3."
Database backup software reduces the time, money, and data lost to corruption or system failure. With InterBase server, you get the same level of protection built in at no additional cost. Database backups can be stored locally or in a shared environment as part of your disaster recovery plan. There are several reasons why you would want to back up your InterBase databases, including the following:
Preserves data by making a copy of both the data and the data structures (metadata).
Improves database performance – balances indexes and performs garbage collection on outdated records.
Reclaims space occupied by deleted records.
Gives you the chance to change the database page size and to distribute the database among multiple files/disks when restoring.
Back up your database
With InterBase, you have two options to back up your database:
gbak (command-line tool)
Use the InterBase gbak command to specify and execute backup and restore operations from a Windows or Unix command line. Familiarity with isql, the InterBase version of SQL, is recommended. gbak provides a number of options to help tailor your backup and restore to different circumstances and environments.
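For illustration, a typical gbak backup and restore might look like this (the database names, paths, and credentials below are placeholders, not taken from the post):

```shell
# Back up a database to a portable backup file (-b = backup):
gbak -b -user SYSDBA -password masterkey /data/employee.ib /backups/employee.ibk

# Restore it, optionally changing the page size (-c = create, -p = page size):
gbak -c -p 8192 -user SYSDBA -password masterkey /backups/employee.ibk /data/employee_restored.ib
```

Restoring to a new file, as shown, avoids the pitfall mentioned below of overwriting a database that is currently in use.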
IBConsole
A user interface that has a series of option boxes for specifying the type of backup and restore you want to perform.
Restore your database
The process to restore an InterBase database is quite simple. You have many options available to alter your database; you can change the page size, restore or create the database, etc.
One note to add: When restoring a database, do not replace a database that is currently in use.
Database Validation
There are several reasons why you should validate your databases.
Whenever a database backup is unsuccessful.
Whenever an application receives a “corrupt database” error.
Periodically, to monitor for corrupt data structures or misallocated space.
Any time you suspect data corruption.
InterBase makes it easy to validate your database. With a few clicks or keystrokes you can check on your database. In IBConsole, do any of the following:
Select a disconnected database in the Tree pane and double-click Validation in the Work pane.
Right-click a disconnected database in the Tree pane and choose Validation from the context menu.
Select Database -> Maintenance -> Validation.
To validate database:
Check that the database indicated is correct. If it is not, cancel this dialog and re-initiate the Database Validation dialog under the correct database.
Specify which validation options you want by clicking in the right column and choosing True or False from the drop-down list. See the table below for a description of each option.
Click OK if you want to proceed with the validation, otherwise click Cancel.
When IBConsole validates a database, it verifies the integrity of data structures. Specifically, it does the following:
Reports corrupt data structures.
Reports misallocated data pages.
Returns orphan pages to free space.
Check out this short video on how to quickly backup and restore an InterBase database.
Of course, when you choose to back up your databases depends on your specific needs. In some cases it could be daily, so check out how to use incremental dumps to maintain your databases over time.
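A daily backup like the one suggested above can simply be scheduled; a minimal cron sketch on a Unix host (the paths, credentials, and database name are placeholders, not from the post):

```shell
# Run a full gbak backup every night at 02:00, keeping one file per day.
# Added via `crontab -e` on the InterBase server:
#
# 0 2 * * * /opt/interbase/bin/gbak -b -user SYSDBA -password masterkey \
#     /data/mydb.ib /backups/mydb-$(date +\%F).ibk
```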
BeaconFence is a new proximity solution for developers. It gives you GPS-free indoor and outdoor location tracking, with events fired as the user of the smartphone application moves within an area or zone.
BeaconFence has a mapping technology that lets you define rectangular and radial zones for any physical layout. With BeaconFence you can track location down to inches; furthermore, you can track intersections, such as entries and exits, with callback events.
Why is this technology so cool? Because GPS does not work well inside buildings. BeaconFence can be used in buildings, retail shops, or on factory floors.
How can I add proximity tracking to desktop and mobile apps?
Here in this short video, you can have a full idea about BeaconFence.
If you would like to learn more about this technology you can watch these free webinars by Embarcadero Technologies:
RAD Studio comes with hundreds of visual and non-visual components to make developers' lives easier. But not knowing how to use them is often the problem. I have seen many new VCL and FMX developers try to do things the hard way. So we are here to help make things easy for you.
Most problems occur when a developer does not know what is available in the development environment. For instance, creating visually stunning cross-platform user interfaces with FireMonkey can sometimes be intimidating. But knowing where and when to use the right component is the mark of a good RAD developer.
Here, I would like to discuss the TMultiView component. TMultiView allows you to create a master-detail interface quickly, and that interface can be used on any platform available in RAD Studio.
How can I create a cross-platform master detail interface?
The Multiview Navigation demo shows you how to implement a master-detail interface and display the Multiview control as a slide-in drawer, popover menu, docked panel, and several other modes.
procedure TForm1.ListBox1ItemClick(const Sender: TCustomListBox; const Item: TListBoxItem);
begin
  // Deselect the item and close the master pane once a choice has been made.
  Item.IsSelected := False;
  MultiView1.HideMaster;
end;

procedure TForm1.MultiView1PresenterChanging(Sender: TObject; var PresenterClass: TMultiViewPresentationClass);
begin
  // Swap which master button is wired to the MultiView, depending on the
  // presentation mode that is about to be activated.
  if PresenterClass = TMultiViewNavigationPanePresentation then
  begin
    MasterButton.Visible := False;
    MultiView1.MasterButton := MasterButton2;
  end
  else
  begin
    MasterButton2.Visible := False;
    MultiView1.MasterButton := MasterButton;
  end;
end;

procedure TForm1.nbDurationSlidingChange(Sender: TObject);
begin
  // Adjust how long the drawer's slide animation takes.
  MultiView1.DrawerOptions.DurationSliding := nbDurationSliding.Value;
end;
Be sure to head over and check out the Multiview Navigation demo on the GetIt portal; you can download it from the IDE using the GetIt Package Manager.
A question that comes up frequently: how do you proceed if you have bought a current version but (for now) only want to use an older one, for example because you first want to work on a legacy project with an older version of Delphi/C++Builder?
Example: you buy a current 10.4.x license of Delphi/C++Builder or RAD Studio, but first want to use version 2009 to make changes to your project. You do not want to install the (purchased) 10.4 license yet.
What you need:
A supported operating system (there are some problems with old versions (older than XE8) under Windows 10)
The direct download links, which are sent by email during the request process; with a license server, you can download the files by uploading the SLIP file at https://my.embarcadero.com
A requested serial number (individual or generic; more on that in a moment)
I like to recommend running old versions on a suitable operating system inside a virtual machine.
First, you need to distinguish which type of license you purchased: either a user license with a serial number (AAAA-BBBBBB-CCCCCC-DDDD) or a license that works with a license server (ELC, the "Enterprise License Center", also called the "Embarcadero License Center").
License as a serial number AAAA-BBBBBB-CCCCCC-DDDD
If you have a serial number, there are two things to keep in mind. First, you have to "pseudo"-activate your newly purchased license (e.g. 10.4 Sydney). Alternatively, you install and activate the license with the purchased serial number; then our system knows directly what is going on. If you cannot or do not want to install the current version: go to https://license.embarcadero.com and enter the serial number. As the registration code, enter "123456". You have to log in to do this (with a Developer Network account; if you do not have one, or no longer know the password: https://my.embarcadero.com). This tells our system that the serial number is linked to the account of "Max Mustermann". Now you can request serial numbers and download links for the previous versions at https://www.embarcadero.com/de/products/rad-studio/previous-versions. Tip: request all available previous versions with serial numbers right away! They are then sent to the email address stored in the Developer Network account.
A few remarks (after/while requesting the previous versions):
You may get messages saying that you already own product XY in version Z (i.e. it is stored as activated in your account). Deselect these so the remaining ones can be requested.
The serial numbers (including download links) are sent to the stored email address. After the request, the web browser also shows the direct download links again; you can safely ignore these (once all the emails have arrived).
Previous versions must be requested within 180 days of purchase!
License in combination with an ELC license server
If you work with a license server, the "Order Confirmation" email from Embarcadero contains a link (https://license.embarcadero.com/srs6/prev_rad_versions_installation.jsp) where you enter your certificate number (License Certificate Number: XXXXXX) and receive a list of serial numbers that are used exclusively for installation.
The subsequent activation of license-server licenses is then done by importing the SLIP file. Here, using C++Builder 2009 as an example:
Installation with a "generic" serial number:
At first start, the old product reports that it is not licensed. Here you select the SLIP file from the license server:
C++Builder then starts normally (in combination with the license server).
A few remarks here as well:
The old versions of Delphi/C++Builder (2006 to 2010) have a problem with a special font that is stored in the TEMP directory. Solution/workaround: https://sourceforge.net/projects/dzeditorlineendsfix/
The generic serial number, let me repeat, is only and exclusively for installation. It cannot be activated or used in any other way.
type
  TFormMain = class(TForm)
    TimerEnableMetricSettings: TTimer;
    procedure FormCreate(Sender: TObject);
    procedure FormDestroy(Sender: TObject);
    procedure TimerEnableMetricSettingsOnTimer(Sender: TObject);
  private
    FMethodWnd: HWND;
    procedure WTS_SessionWndProc(var Message: TMessage);
    procedure DoHandleRDPLock;
    procedure DoHandleRDPUnLock;
  end;

var
  FormMain: TFormMain;

implementation

{$R *.dfm}

procedure TFormMain.DoHandleRDPLock;
begin
  // Prevent the VCL app from reacting to WM_SETTINGCHANGE while an RDP
  // session is locked/disconnected.
  // Stop the timer if it is already running
  if TimerEnableMetricSettings.Enabled then
    TimerEnableMetricSettings.Enabled := False;
  Application.UpdateMetricSettings := False;
end;

procedure TFormMain.DoHandleRDPUnLock;
begin
  // Stop the timer if it is already running
  if TimerEnableMetricSettings.Enabled then
    TimerEnableMetricSettings.Enabled := False;
  // Re-start the timer.
  TimerEnableMetricSettings.Enabled := True;
end;

procedure TFormMain.FormCreate(Sender: TObject);
begin
  TimerEnableMetricSettings.Interval := 30000;
  TimerEnableMetricSettings.Enabled := False;
  // This hooks the method WTS_SessionWndProc below to receive lock/unlock
  // notifications (WTSRegisterSessionNotification is exported by wtsapi32.dll).
  FMethodWnd := AllocateHWnd(WTS_SessionWndProc);
  WTSRegisterSessionNotification(FMethodWnd, NOTIFY_FOR_THIS_SESSION);
end;

procedure TFormMain.FormDestroy(Sender: TObject);
begin
  if FMethodWnd <> 0 then
  begin
    WTSUnRegisterSessionNotification(FMethodWnd);
    DeallocateHWnd(FMethodWnd);
  end;
end;

procedure TFormMain.WTS_SessionWndProc(var Message: TMessage);
begin
  if Message.Msg = WM_WTSSESSION_CHANGE then
  begin
    case Message.wParam of
      WTS_SESSION_LOCK,
      WTS_REMOTE_DISCONNECT: DoHandleRDPLock;
      WTS_REMOTE_CONNECT,
      WTS_SESSION_UNLOCK: DoHandleRDPUnLock;
    end;
  end;
  Message.Result := DefWindowProc(FMethodWnd, Message.Msg, Message.WParam, Message.LParam);
end;

procedure TFormMain.TimerEnableMetricSettingsOnTimer(Sender: TObject);
begin
  // Stop the timer
  TimerEnableMetricSettings.Enabled := False;
  // It is recommended to wait a few seconds before this runs,
  // hence the 30000 ms timer interval set in FormCreate.
  Application.UpdateMetricSettings := True;
end;

end.
Start using 10.4.2 today
A trial version of 10.4.2 is already available, and new product purchases will include the 10.4.2 download. If you already own the product and have an active Update Subscription, you can use RAD Studio 10.4.2 with your existing license. 10.4.2 can be downloaded from the new customer portal (my.embarcadero.com).
Data visualization is an essential part of most applications. Data without a visual or graphical representation is hard to absorb; with charts, diagrams, and maps you can understand the data just by looking at it.
Creating a graphical representation of data in Delphi is very easy thanks to the built-in components. For instance, the TeeChart component library offers 2D and 3D visual components that make your data easier to grasp.
Since the Delphi and C++Builder community keeps building out the ecosystem for RAD developers, there are different solutions for specific cases. For example, the TMS FNC Chart is a cross-platform chart component designed for business, statistical, financial, and scientific data.
In olden times, cryptography was used to transform plain messages (plaintext) into an indecipherable format. These incomprehensible messages could then be exchanged, and the receiver could read them only by knowing the algorithm, or by holding a special decryption key, that turns the ciphered text back into readable form.
The process of transforming a message from plaintext to ciphertext is called encryption, and the process of transforming ciphertext back to plaintext is called decryption. Many tech-minded people love learning "hacking", or by its real name, cryptanalysis: the study of the designs and schemes of encryption methods, through which you can find weaknesses in those cipher schemes.
During World War II, the Germans used a complex electromechanical cipher machine known as the Enigma machine to keep their military plans secret. After World War II, the development of digital computers also brought highly sophisticated and secure cipher systems, and scientists devised many encryption algorithms as computing advanced.
Today, adding cryptography to a system is easy thanks to high-level languages like Delphi, and there are many open-source libraries implementing dozens of different encryption algorithms. You just add one of those libraries to your project and you are good to go.
LockBox is a cryptography library for Delphi and C++Builder. It supports AES, DES, 3DES, SHA, MD5, Blowfish, Twofish, and many other popular algorithms, and can be used on Win32, Win64, macOS, iOS, and Android.
procedure TmfmLockboxTests.Button1Click(Sender: TObject);
const
  sBoolStrs: array[boolean] of string = ('Failed', 'Passed');
var
  Ok: boolean;
begin
  Codec1.StreamCipherId := 'native.StreamToBlock';
  Codec1.BlockCipherId := 'native.AES-192';
  Codec1.ChainModeId := 'native.CBC';
  Codec1.Password := 'ab';
  Ok := Codec1.SelfTest;
  Memo1.Lines.Add(Format('%s self test %s', [Codec1.Cipher, sBoolStrs[Ok]]));
end;
One feature update in RAD Studio 10.4.2, driven by analysis showing an increase in the number of developers using remote desktop for development during Covid, has been a speed-up of IDE rendering over Remote Desktop.
The main issues focused on were the IDE freezing in some situations (such as when connecting or disconnecting remote desktop), flickering, and a few AVs (access violations).
The QP items looked at for 10.4.2 include:
Reconnecting an existing RDP session using the same screen settings (same machine) RS-99048
Reconnecting an existing RDP session using different screen settings (e.g. from a different machine) RS-103339
Reconnecting an existing RDP session with the FMX designer open causes an AV.
Additionally, there were a number of internal reports.
While I don't have a sample project that can be shared, I do have permission to share a few notes the R&D team at Embarcadero provided based on their experiences, and I hope these will be useful to other developers too.
The root cause of all those issues is that any RDP session change (lock, unlock, connect, disconnect) sent a system-wide setting change (WM_SETTINGCHANGE), causing a message cascade that leads to multiple redraws in the IDE. This caused some of the AVs, because the cascade of messages sent by the OS included WM_THEMECHANGED, which triggered handle recreation for some controls. This affected the VCL/FMX designers when they were left open and a session was reconnected via RDP.
The WTS API provides a way to receive RDP session change notifications (WM_WTSSESSION_CHANGE). Managing these lets the IDE be notified when the session is locked, unlocked, connected, or disconnected, and from there we can choose how WM_SETTINGCHANGE is handled and avoid the flickering/repainting issues.
One note from the R&D team: using VCL styles over terminal services makes an application more likely to flicker even under normal circumstances.
This skeleton code sample (untested) should hopefully point anyone looking to add similar support to their applications in the right direction.
type
  TFormMain = class(TForm)
    TimerEnableMetricSettings: TTimer;
    procedure FormCreate(Sender: TObject);
    procedure FormDestroy(Sender: TObject);
    procedure TimerEnableMetricSettingsOnTimer(Sender: TObject);
  private
    FMethodWnd: HWND;
    procedure WTS_SessionWndProc(var Message: TMessage);
    procedure DoHandleRDPLock;
    procedure DoHandleRDPUnLock;
  end;

var
  FormMain: TFormMain;

implementation

{$R *.dfm}

procedure TFormMain.DoHandleRDPLock;
begin
  // Prevent the VCL app reacting to WM_SETTINGCHANGE when an RDP session is locked/disconnected.
  // Stop the timer if it's already running
  if TimerEnableMetricSettings.Enabled then
    TimerEnableMetricSettings.Enabled := False;
  Application.UpdateMetricSettings := False;
end;

procedure TFormMain.DoHandleRDPUnLock;
begin
  // Stop the timer if it's already running
  if TimerEnableMetricSettings.Enabled then
    TimerEnableMetricSettings.Enabled := False;
  // Re-start the timer.
  TimerEnableMetricSettings.Enabled := True;
end;

procedure TFormMain.FormCreate(Sender: TObject);
begin
  TimerEnableMetricSettings.Interval := 30000;
  TimerEnableMetricSettings.Enabled := False;
  // This hooks the method WTS_SessionWndProc below to receive the lock/unlock notifications
  FMethodWnd := AllocateHWnd(WTS_SessionWndProc);
  WTSRegisterSessionNotification(FMethodWnd, NOTIFY_FOR_THIS_SESSION);
end;

procedure TFormMain.FormDestroy(Sender: TObject);
begin
  if FMethodWnd <> 0 then
  begin
    WTSUnRegisterSessionNotification(FMethodWnd);
    DeallocateHWnd(FMethodWnd);
  end;
end;

procedure TFormMain.WTS_SessionWndProc(var Message: TMessage);
begin
  if Message.Msg = WM_WTSSESSION_CHANGE then
  begin
    case Message.wParam of
      WTS_SESSION_LOCK,
      WTS_REMOTE_DISCONNECT: DoHandleRDPLock;
      WTS_REMOTE_CONNECT,
      WTS_SESSION_UNLOCK: DoHandleRDPUnLock;
    end;
  end;
  Message.Result := DefWindowProc(FMethodWnd, Message.Msg, Message.WParam, Message.LParam);
end;

procedure TFormMain.TimerEnableMetricSettingsOnTimer(Sender: TObject);
begin
  // stop the timer
  TimerEnableMetricSettings.Enabled := False;
  // it is recommended to wait a few seconds before this is run,
  // hence setting the timer interval to 30000 in FormCreate
  Application.UpdateMetricSettings := True;
end;

end.
Delphi and C++Builder provide a set of advanced REST components for talking to external APIs, and several sample applications show you how to exchange data with them. For example, the Cloud API Test sample is a fully functional 5K+ line demo for working with Microsoft Azure and Amazon Web Services. This means you can connect to any web service easily with the built-in components.
TMS FMX Cloud Pack is another option for connecting to cloud services quickly. It offers many abstractions, so developers can spend less time building the connection to cloud services such as YouTube, PayPal, LinkedIn, Microsoft Computer Vision, etc.
//Initialize the application key and secret for each service
procedure TForm1.InitAppKeys;
begin
TMSFMXCloudFaceBook1.App.Key := FacebookAppkey;
TMSFMXCloudFaceBook1.App.Secret := FacebookAppSecret;
TMSFMXCloudFaceBook1.PersistTokens.Key := GetDocumentsDirectory + '/facebook.ini';
TMSFMXCloudFaceBook1.PersistTokens.Section := 'tokens';
TMSFMXCloudFaceBook1.LoadTokens;
TMSFMXCloudFaceBook1.Tag := integer(csFacebook);
TMSFMXCloudTwitter1.App.Key := TwitterAppkey;
TMSFMXCloudTwitter1.App.Secret := TwitterAppSecret;
TMSFMXCloudTwitter1.PersistTokens.Key := GetDocumentsDirectory + '/twitter.ini';
TMSFMXCloudTwitter1.PersistTokens.Section := 'tokens';
TMSFMXCloudTwitter1.LoadTokens;
TMSFMXCloudTwitter1.Tag := integer(csTwitter);
TMSFMXCloudDropBox1.App.Key := DropBoxAppkey;
TMSFMXCloudDropBox1.App.Secret := DropBoxAppSecret;
TMSFMXCloudDropBox1.PersistTokens.Key := GetDocumentsDirectory + '/dropbox.ini';
TMSFMXCloudDropBox1.PersistTokens.Section := 'tokens';
TMSFMXCloudDropBox1.LoadTokens;
TMSFMXCloudDropBox1.Tag := integer(csDropBox);
TMSFMXCloudGDrive1.App.Key := GoogleAppKey;
TMSFMXCloudGDrive1.App.Secret := GoogleAppSecret;
TMSFMXCloudGDrive1.PersistTokens.Key := GetDocumentsDirectory + '/gdrive.ini';
TMSFMXCloudGDrive1.PersistTokens.Section := 'tokens';
TMSFMXCloudGDrive1.LoadTokens;
TMSFMXCloudGDrive1.Tag := integer(csGDrive);
TMSFMXCloudFlickr1.App.Key := FlickrAppKey;
TMSFMXCloudFlickr1.App.Secret := FlickrAppSecret;
TMSFMXCloudFlickr1.PersistTokens.Key := GetDocumentsDirectory + '/flickr.ini';
TMSFMXCloudFlickr1.PersistTokens.Section := 'tokens';
TMSFMXCloudFlickr1.LoadTokens;
TMSFMXCloudFlickr1.Tag := integer(csFlickr);
end;
//Initializes the status by showing an Ok or Error image next to each service.
procedure TForm1.InitStatus(cs: TCloudServices);
begin
if csFacebook in cs then
begin
svcOKFacebook.Visible := TMSFMXCloudFaceBook1.TestTokens;
svcErrFacebook.Visible := not svcOKFacebook.Visible;
end;
if csTwitter in cs then
begin
svcOKTwitter.Visible := TMSFMXCloudTwitter1.TestTokens;
svcErrTwitter.Visible := not svcOKTwitter.Visible;
end;
if csDropBox in cs then
begin
svcOKDropBox.Visible := TMSFMXCloudDropBox1.TestTokens;
svcErrDropBox.Visible := not svcOKDropBox.Visible;
end;
if csGDrive in cs then
begin
svcOKGDrive.Visible := TMSFMXCloudGDrive1.TestTokens;
svcErrGDrive.Visible := not svcOKGDrive.Visible;
end;
if csFlickr in cs then
begin
svcOKFlickr.Visible := TMSFMXCloudFlickr1.TestTokens;
svcErrFlickr.Visible := not svcOKFlickr.Visible;
end;
end;
TMS FMX Cloud components support cross-platform application development, with a component architecture based on native FireMonkey classes.
Here are some of the available components:
Google Calendar, Google Contacts, Windows Live Calendar, Outlook Contacts
Apple CloudKit, DropBox, OneDrive, Google Storage, Google Sheets
With the MiTeC System Information component suite, you can retrieve comprehensive system information in Delphi.
Sometimes I need to know my system in full, and I use utility applications to find out what kind of components my system contains.
As a Delphi developer, you can explore the MiTeC System Information component suite to build something like that yourself, or even create and distribute your own system-information application.
var
c,i,idx: Integer;
begin
List.Items.BeginUpdate;
try
List.Items.Clear;
if SI.OS.DataAvailable and (SI.OS.OSName<>'') then begin
with List.Items.Add do begin
Caption:='Machine';
if SI.Machine.BIOS.BIOSDataCount>0 then
SubItems.Add(SI.Machine.BIOS.BIOSValue['SystemProductName'].Value)
else
SubItems.Add(SI.Machine.SMBIOS.SystemModel);
end;
with List.Items.Add do begin
Caption:='CPU';
SubItems.Add(Format('%d x %s - %d MHz',[SI.CPU.CPUPhysicalCount,SI.CPU.CPUName,SI.CPU.Frequency]));
end;
with List.Items.Add do begin
Caption:='Memory';
if SI.Machine.SMBIOS.MemoryDeviceCount>0 then begin
c:=0;
idx:=-1;
for i:=0 to SI.Machine.SMBIOS.MemoryDeviceCount-1 do
if SI.Machine.SMBIOS.MemoryDevice[i].Size>0 then begin
Inc(c);
if idx=-1 then
idx:=i;
end;
SubItems.Add(Format('%d x %d MB %s',[c,
SI.Machine.SMBIOS.MemoryDevice[idx].Size,
MemoryDeviceTypes[SI.Machine.SMBIOS.MemoryDevice[idx].Device]]))
end else
SubItems.Add(Format('%d MB',[SI.Memory.PhysicalTotal shr 20]));
end;
for i:=0 to SI.Display.AdapterCount-1 do
with List.Items.Add do begin
Caption:='Graphics';
if SI.Display.Adapter[i].Memory>0 then
SubItems.Add(Format('%s - %d MB',[SI.Display.Adapter[i].Name,SI.Display.Adapter[i].Memory shr 20]))
else
SubItems.Add(SI.Display.Adapter[i].Name);
end;
with List.Items.Add do begin
Caption:='OS';
SubItems.Add(Format('%s %s',[SI.OS.OSName,SI.OS.OSEdition]));
end;
end else
List.Items.Add.Caption:='No data available';
finally
List.Items.EndUpdate;
end;
end;
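The memory-summary branch of the procedure above can be sketched language-neutrally. This is an illustrative Python rendering of the same count-the-populated-slots logic, not part of the MiTeC API:

```python
def summarize_memory(device_sizes_mb):
    """Mirror of the SMBIOS branch above: count populated memory slots
    and report the first populated module's size (sizes in MB)."""
    populated = [size for size in device_sizes_mb if size > 0]
    if populated:
        return f"{len(populated)} x {populated[0]} MB"
    # No per-device data: the Pascal code falls back to total physical memory.
    return None

print(summarize_memory([8192, 0, 8192, 0]))  # → 2 x 8192 MB
```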
Here are some of the components available in the component suite:
TMiTeC_CPU – provides detailed CPU information
TMiTeC_Machine – provides information about the computer or virtual machine, BIOS, TPM
TMiTeC_Security – detects installed AntiViruses, AntiSpyware, and Firewalls
Do you want to rotate a 3D object from text input? Copy your matrices or tables and paste them into Excel forms? Sort .txt files in modern C++? Learn how to use the C++ ‘optional’ feature? Want to learn more about practical memory-pool-based allocators? Please check our LearnCPlusPlus.org website to learn these and more!
There is a new RAD Studio 10.4.2 release this week. Don’t forget to check out the new C++Builder 10.4.2 release and its new features. We will have posts about these features in the coming weeks.
Creating an app and starting a business is different from distributing the finished software to the world. Globalization is the biggest trend right now in all markets. If you want to stay competitive, you should localize your software for each market specifically, or at least target the big markets.
With RAD Studio (Delphi and C++Builder), we can create solutions really fast but when it comes to software localization, there are different solutions available. The best option is the TsiLang localization component suite. This is a commercial tool and the trial version is available on the GetIt portal.
TsiLang component set includes several highly professional, easy-to-use VCL components, wizards, and tools for developing multi-language and localized applications under RAD Studio.
One of the best features is the complete support for FireMonkey and FireUI. With FireUI, you can easily check how your application looks on a real device.
TsiLang Features:
Switching language on the fly at run time as well as at design time
No external files or databases to create multi-language applications
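As a rough illustration of the general idea behind run-time language switching, here is a minimal lookup-table sketch. The names are hypothetical and this is not TsiLang's API, just the underlying concept:

```python
# A tiny in-memory translation table, switched at run time.
TRANSLATIONS = {
    "en": {"greeting": "Hello"},
    "de": {"greeting": "Hallo"},
}

def tr(lang: str, key: str) -> str:
    """Look up a key in the active language, falling back to English."""
    return TRANSLATIONS.get(lang, TRANSLATIONS["en"]).get(key, key)

print(tr("de", "greeting"))  # → Hallo
print(tr("fr", "greeting"))  # unknown language falls back → Hello
```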
Winlive Pro Karaoke is a cross-platform karaoke application developed with Delphi. It is a multimedia player that can load and edit mp3, mp3+cdg, mp4, midi, kar, jpg, txt, and other formats; it can play karaoke source files such as midi, mp3, and wav, change the instruments in a midi file, and change the pitch. It can also save and manage playlists. The developers describe it as follows.
A trial version of 10.4.2 is already available, and new product purchases will include the 10.4.2 download. If you already own the product and have an active Update Subscription, you can use RAD Studio 10.4.2 with your existing license. 10.4.2 can be downloaded from the new customer portal (my.embarcadero.com).
A NameList list box containing FirstName and LastName list box items, each with edit controls providing input fields.
The other four tab items, TabItem2, TabItem3, TabItem4, and TabItem5, contain:
List boxes containing the PersonalInfoList, EducationList, and WorkList list box items, plus a memo control, with edit and combo box controls providing the input fields.
When you run the application, TabItem1 is displayed first. Before moving to any of the next tab items, all fields on the current tab must be completed; otherwise, the Next button is not enabled.
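The gating rule described above can be sketched as a simple predicate (a hypothetical helper, not FMX API): the Next button is enabled only when every field on the current tab is non-empty.

```python
def next_enabled(fields: dict) -> bool:
    """True only when every input field on the current tab has a value."""
    return all(value.strip() != "" for value in fields.values())

print(next_enabled({"FirstName": "Ada", "LastName": "Lovelace"}))  # → True
print(next_enabled({"FirstName": "Ada", "LastName": ""}))          # → False
```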
On pressing the Next or Back buttons on the toolbar of the tab control, the NextTabAction or PreviousTabAction is executed.
To assign these actions to toolbar buttons, in the Structure view select a button, for example Next (Button1). In the Object Inspector, select the Action node, click the down arrow on the right, and assign New Standard Action > Tab > TNextTabAction to the Action property. To set the tab control in which to switch active tabs, in the Object Inspector expand the Action node, select the TabControl item, click the down arrow on the right, and from the list select the TabControl1 tab control.
To enable or disable the Keyboard Toolbar, the SetToolbarEnabled method is used.
The Keyboard Toolbar is hidden by default, appearing only when completing information in TEdit objects. Its visibility is set using the SetHideKeyboardButtonVisibility method.
You can refer to the link below for more information about this sample:
You can find Delphi code samples in GitHub repositories. Search by name in the samples repositories according to your RAD Studio version.
How to Use the Sample
Navigate to the location given above and open MappingColumns.dproj.
Press F9 or choose Run > Run.
Click on the Use Connection Definition combo box and select an option.
Files
MappingColumns.dproj, MappingColumns.dpr – the project itself.
fMappingColumns.pas, fMappingColumns.fmx – the main form.
Implementation
To set up the columns mapping, the sample implements the following steps.
Create table adapter
var
oAdapt: IFDDAptTableAdapter;
// ...
begin
// create table adapter
FDCreateInterface(IFDDAptTableAdapter, oAdapt);
Assign command
var
oComm: IFDPhysCommand;
// ...
begin
// ...
with oAdapt do begin
FConnIntf.CreateCommand(oComm);
SelectCommand := oComm;
SelectCommand.Prepare('select * from {id FDQA_map1}');
Set source result set name
SourceRecordSetName := EncodeName('FDQA_map1');
Set the DatSTable name
Set the DatSTable name where the rows are fetched.
Delphi 10.4.2 Sydney is out, and it is full of new features, fixes, and general quality improvements. I really do believe it is the perfect mix of polish and new features, and everyone I’ve talked to seems to agree. One of the stand-out features is the Delphi compiler speed improvements. They are mostly visible in the Win32 compiler and are partially the result of the details provided by Andreas Hausladen and the fixes in previous versions of his IDE Fix Pack.
Here is the list of fixes, just in case you were curious….
FileSystem
SearchUnitNameInNS
GetUnitOf
CacheControl
FileNameStringFunctions
KibitzIgnoreErrors
RootTypeUnitList
MapFile.fprintf
Unit.RdName
PrefetchToken
StrLenCalls
(Note: Because of the nature of IDE Fix Pack, our implementation is different, but accomplishes the same goal.)
WarnLoadResString
DbkGetFileIndex
UnlinkImports
ResetUnits
KibitzCompilerImplUnitReset
UnlinkDuringCompile
UnitFreeAll
UnitFindByAlias
SymLookupScope
ImportedSymbol
NoUnitDiscardAfterCompile
SourceOutdated
MapFileBuffer
BackgroundCompilerFileExists
DrcFileBuffer
Package.CleanupSpeed
Optimization
FindPackage
x64.JumpOpt
x64.SymTabHashTable
ReleaseUnusedMemory
FileNameStringFunctions
Memory.Shrink
Most of the time Delphi compiles really quickly, and depending on your code you may not see any performance improvements. I’ve tried some of my projects and didn’t see any changes. Matthias Eißing suggested he saw a significant speed-up compiling HeidiSQL, so I gave it a shot and made a video.
In summary, the Win32 compile went from 5.5 seconds in 10.4.1 to 3.3 seconds in 10.4.2. That is a 40% speed improvement.
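As a quick sanity check on the percentages quoted here and in the testimonials below, the arithmetic is just the relative reduction in compile time (nothing Delphi-specific):

```python
def speedup_percent(before: float, after: float) -> float:
    """Percentage reduction in compile time."""
    return (before - after) / before * 100

print(round(speedup_percent(5.5, 3.3)))        # HeidiSQL Win32 compile → 40
print(round(speedup_percent(2.5 * 60, 1.5 * 60)))  # 2.3M-line project → 40
```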
A few other people have shared the speed improvements they discovered moving to 10.4.2 Sydney.
Adrian Gallero, project manager at TMS Software, showed compiling the million lines of code behind TMS FlexCel. It contains “lots of generics, a little more than 3000 units, multiple includes, cycles of units that use themselves recursively, and complex dependencies.” His compile-time went from 30 seconds in 10.3 Rio to 19 seconds in 10.4.2 Sydney.
I normally wait a while before adopting a new Delphi version, but given all the time I spend compiling FlexCel, I migrated to 10.4.2 yesterday.
I’m going to call RAD Studio Delphi 10.42 “the speedy supermodel release.” So many lovely subtle (and overt) tweaks to the UI and BOY DOES IT COMPILE FAST! It’s solid, contains a bunch of quality improvements, LSP is really kicking it now and the new ‘squiggly line’ choices for error insight and so on just add to the overall feel of solidity.
Moving from Delphi 10.3.3 Rio to 10.4.2 Sydney the compile time for our 2.3 million lines of code dropped from a respectable 2.5 minutes to an incredible 1.5 minutes! This makes the turn-around times for daily work 40% faster!
My recommendation to all Delphi users is: Move to the latest version 10.4.2 Sydney immediately!
Wow. I am really impressed. This is the long-awaited Delphi 7 successor. The new gold standard. Compiling: Superspeed. Working on remote Desktop. Wow. Compilation in 10 seconds instead of 90 seconds. Loading huge forms without trouble.
The Delphi VCL ecosystem is huge, and because of it, developing desktop applications is easy. In one sentence we can say: Delphi VCL is a killer visual development solution for Windows.
Until now, we have explored so many useful and complex VCL and FMX libraries and frameworks:
Today, we will learn about ICS for VCL. ICS stands for Internet Component Suite – freeware with full source code for Delphi and C++Builder VCL developers.
These internet components support most of the major communication protocols and are fully event-driven and non-blocking. The suite also includes OpenSSL support.
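To illustrate the event-driven, non-blocking model the ICS components are built around, here is a minimal sketch using Python's standard selectors module. This shows only the general pattern (register a socket, react when the selector reports it readable); it is not ICS code:

```python
import selectors
import socket

def echo_once(payload: bytes) -> bytes:
    """Send payload over a non-blocking socket pair and read it back
    when the selector reports the peer readable."""
    sel = selectors.DefaultSelector()
    a, b = socket.socketpair()
    a.setblocking(False)
    b.setblocking(False)
    sel.register(b, selectors.EVENT_READ)   # wait for data on b, event-style
    a.send(payload)                         # data becomes readable on b
    received = b""
    for key, _ in sel.select(timeout=1):    # the "event loop" fires here
        received = key.fileobj.recv(64)
    sel.close()
    a.close()
    b.close()
    return received

print(echo_once(b"ping"))  # → b'ping'
```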
Here is the ICS component list from the ICS website:
ICS Component
Description
TWSocket
Basic winsock component. Fully event-driven and multi-thread safe. It supports TCP, UDP, SOCKS5 and can be used to build both client and server programs. Option: SSL support.
TWSocketServer
A TWSocket derived component for multi-user server handling. Option: SSL support.
TSmtpCli
SMTP client protocol support. Used to send mail and attached files to a mail server.
TPop3Cli
POP3 client protocol support. Used to retrieve mail from a mail server.
TDnsQuery
DNS query component used to retrieve MX records (Mail Exchange, needed for most SMTP applications) from DNS, as well as A records (IP address from hostname) and PTR records (hostname from IP address).
TMimeDecode
Supports MIME decoding (file attachments). Useful with the TPop3Cli component.
TFtpCli
FTP client protocol support. Used to send and receive files to/from an FTP server. Also able to do directory and file handling.
TFtpSrv
FTP server protocol support. This component will make your application a full featured FTP server. Beta version.
TNntpCli
NNTP client protocol support. Used to read and post news to/from a newsgroup server.
THttpCli
HTTP client protocol support. Used to access any WEB server for getting or posting data. Base component to build a web browser. Includes Proxy support. Option: HTTPS support (Secure SSL communication).
THttpSrv
HTTP server protocol support. Used to build a web server or to add a browser interface to your application. Option: HTTPS support (Secure SSL communication).
TTnCnx
TELNET client protocol support.
TEmulVT
ANSI terminal emulation (like a TMemo but with ANSI escape sequences interpretation).
TTnEmulVT
TELNET and ANSI terminal emulation combined into a single component. You can build a full telnet client program in only a few lines of code.
TTnScript
TELNET scripting component. Used to automate work with telnet session (such as auto login and password).
TFingerCli
FINGER client. Use it to retrieve information about a logged-in user connected to a Unix machine (or any other machine with a finger server).
TPing
ICMP Ping support. You can Ping a host and get the resulting info.
When you install the components you get several useful sample applications that help to show all the functionalities of the components.
When you start building your projects, you may want a single UI component set that targets Android, iOS, macOS, and Windows. The standard components provide enough tools to make any type of application, but sometimes you need a dedicated cross-platform UI component set. One solution is the TMS FNC UI Pack, which includes optimized and advanced grid, planner, treeview, rich editor, and other components.
The TMS FNC Grid component is a feature-rich and powerful grid for building responsive, data-oriented applications. It offers:
Autosizing columns / rows on double-click
Highly configurable and flexible grid
Cells with support for HTML formatted text, like hyperlinks
Filtering and grouping features
Highly optimized for iOS and Android
HTML and PDF export options
Excel import
And more
You can easily download and install the TMS FNC UI Pack from the GetIt Package Manager. And you will get samples and complete documentation for all the available components.
A Delphi developer asked: how do you change the background color of an FMX (FireMonkey) TEdit?
As many know, for a VCL TEdit, you could just set the color property of the TEdit, like this:
Edit1.Color := clYellow;
Then at Run-Time, after calling Edit1.Color := clYellow, the Edit1 looks like this:
But, a FMX TEdit does not have a Color property, so how do you do the same for a FMX TEdit?
There are a few options for how you can do this with an FMX TEdit:
1. One way is to use FMX styling, and you can control the properties StyledSettings, StyleLookup and StyleName.
2. Create a new FMX style, using the FireMonkey Style Designer, with a copy of the component and switch at runtime.
3. Right-Click on your Edit1 on your FMX Multi-Device Form, and select “Edit Custom Style“.
Here we will look at this option #3, by using “Edit Custom Style“.
Select your TEdit on your FMX Multi-Device Form, Right-Click and select Edit Custom Style. After you click on Edit Custom Style, you will see your TEdit open in the Style Designer, like this:
This will give you a Structure Pane for your FMX Edit1, that looks like this:
As we see in the Structure Pane above, FireMonkey (FMX) controls are arrangements of a tree composed of subcontrols, primitive shapes, and brushes, decorated with effects. These compositions are defined as styles, stored in a style book. The individual elements of a style are internally called resources; because that term has several other meanings, the term style-resource is used for clarity. Styles provide a great deal of customization without subclassing.
From here, you can customize the FMX TEdit style any way you want.
For example, in your Structure Pane, you see that your Edit1Style1 has a background element. You can put a TRectangle inside the TEdit style, and then use the Fill.Color property of the TRectangle to change the background color of your FMX TEdit, following these steps:
On your FMX Form with your Edit1, use the Tool Palette to select and drag a TRectangle onto your Edit1, like this:
Next, position / rearrange the TRectangle inside the TEdit, like this:
Now, in the Structure Pane, if you set the StyleName of the TRectangle inside your FMX TEdit to a name such as “Rectangle_background”, you can use FindStyleResource to find the linked resource object for the style with that name, like this:
Select the Rectangle1 in the Structure, and use the Object Inspector, and change the property Name = Rectangle_background:
Here is the Style Designer, showing the TRectangle inside the TEdit:
The FireMonkey styles that are provided with the product are saved in .Style files located in the default Redist folder, C:\Program Files (x86)\Embarcadero\Studio\21.0\Redist\styles\Fmx, or in your C:\Users\Public\Documents\Embarcadero\Studio\21.0\Styles folder.
Save your new FireMonkey Style (*.style) in your Styles folder (C:\Users\Public\Documents\Embarcadero\Studio\21.0\Styles) with any name, such as FMXEdit_Color.
Now, to use your new Style on your FMX Form, set the StyleName property of your FMX Form to the new FMXEdit_Color style you just created:
Now, at runtime, when you call TRectangle(Edit1.FindStyleResource('Rectangle_background')).Fill.Color := TAlphaColorRec.Yellow, with code like this, for example:
procedure TForm3.Button1Click(Sender: TObject);
var
LFmx: TFmxObject;
LSel: TRectangle; // Uses FMX.Objects;
begin
LFmx := Edit1.FindStyleResource('Rectangle_background');
if Assigned(LFmx) then
begin
LSel := LFmx as TRectangle;
if Assigned(LSel) then
begin
LSel.Fill.Color := TAlphaColorRec.Yellow;
LSel.Stroke.Color := TAlphaColorRec.Red;
LSel.Stroke.Kind := TBrushKind.Solid;
end;
end;
end;
Your FMX TEdit, now gets the Yellow background color, like this:
This example helps to show one possible option on how to customize the FMX styles.
As we mentioned already, FireMonkey (FMX) controls are arrangements of a tree composed of sub controls, primitive shapes, and brushes, decorated with effects.
Note: FMX is multiplatform so you need to make a style for each platform for which the program is intended. For this example, we only created the new Style for the Windows platform:
As we have seen, all controls in FireMonkey (FMX) are style-able via the styling system. This is accomplished by attaching a TStyleBook to the form, and the style is loaded and applied to the form.
Datafile Premier Software, built with Delphi, is a full suite of modules for accounting, manufacturing, and logistics, deployed at 500 companies with 8,500 users in the UK. It is highly flexible: end users can customize data-entry forms, database structures, and reports. A comprehensive set of parameter-driven forms lets customer companies tailor the software to their own business processes. The developers describe it as follows:
“Since its first release in 1985, Datafile Accounting / Business Management Software has helped thousands of UK companies improve efficiency and increase profitability. Datafile covers every aspect of your business.
We keep adding great posts about C++ on our LearnCPlusPlus.org website, and we have received some good feedback – thank you so much. We keep adding more tutorials, snippets, and videos about C++, including some very specific code. Here are some of our picks from what we posted last week. We hope you enjoy them!
Sometimes Developers Want to list or Identify USB devices connected to the machine and perform some actions to the USB devices programmatically. How to enumerate among the USB devices quickly ? Don’t know how to do. Don’t worry. MiTec’s System Information Management Suite’s component helps to enumerate the connected USB devices we will learn how to use use the TMiTec_USB component in this blog post.
Platforms: Windows.
Installation Steps:
You can easily install this component suite from the GetIt Package Manager. The steps are as follows.
In the RAD Studio IDE, navigate to Tools > GetIt Package Manager, select Components in the Categories pane, choose Trial > MiTeC System Information Component Suite 14.3, and click the Install button.
Read the license and click Agree All. An information dialog appears saying "Requires a restart of RAD Studio at the end of the process. Do you want to proceed?" Click Yes and continue.
It will download and install the package. Once installed, click Restart Now.
How to run the Demo app:
Navigate to the Demos folder of the System Information Component Suite trial, which is installed during the GetIt installation, e.g. C:\Users\<user>\Documents\Embarcadero\Studio\21.0\CatalogRepository\MiTeC-14.3\Demos\Delphi
Open the USBEnum project in RAD Studio 10.4.1, then compile and run the application.
This demo app shows how to list the connected USB devices, enumerate them, and access their properties.
Components used in the MiTeC USBEnum demo app:
TMiTeC_USB: Enumerates all connected USB devices and their properties.
TMiTeC_DeviceMonitor: Catches USB, Bluetooth, or TV/monitor connection/disconnection, volume mount/unmount, CD/DVD insert/eject, and other device events.
TTreeView to list the USB device nodes, and TListView to show the selected USB device node's properties.
TButtons to save, refresh, and eject the selected USB device.
Implementation Details:
An instance of TMiTeC_USB named USB is created. Call USB.RefreshData, then add the USB nodes to the tree view by looping through USB.USBNodeCount. Ensure a USB device is connected before adding its property values to the list view.
procedure TForm1.RefreshData;
var
ii,i,j: Integer;
s: string;
pi: PInteger;
r,n,c: TTreeNode;
g: TGUID;
begin
USB.RefreshData;
Caption:=Format('USB Devices (%d connected)',[USB.ConnectedDevices]);
Tree.Items.BeginUpdate;
try
Tree.Items.Clear;
for i:=0 to USB.USBNodeCount-1 do
with USB.USBNodes[i] do begin
s:='';
if ClassGUID.D1=0 then
g:=GUID_DEVCLASS_USB
else
g:=ClassGUID;
SetupDiGetClassImageIndex(spid,g,ii);
case USBClass of
usbHostController: s:=s+Format('%s %d',[ClassNames[Integer(USBClass)],USBDevice.Port]);
usbHub: s:=s+Format('%s (%s)',[USBDevice.USBClassname,ClassNames[Integer(USBClass)]]);
else begin
if USBDevice.ConnectionStatus=1 then begin
if USBClass=usbExternalHub then
s:=s+Format('Port[%d]: %s (%s)',[USBDevice.Port,USBDevice.USBClassname,ClassNames[Integer(USBClass)]])
else begin
if USBDevice.Product<>'' then
s:=s+Format('Port[%d]: %s',[USBDevice.Port,USBDevice.Product])
else
s:=s+Format('Port[%d]: %s',[USBDevice.Port,USBDevice.USBClassname]);
if IsEqualGUID(g,GUID_DEVCLASS_USB) and (Length(USBDevice.Registry)>0) then begin
g:=USBDevice.Registry[0].DeviceClassGUID;
SetupDiGetClassImageIndex(spid,g,ii);
end;
end;
end else
s:=s+Format('Port[%d]: %s',[USBDevice.Port,ConnectionStates[USBDevice.ConnectionStatus]]);
end;
end;
r:=FindNode(ParentIndex);
new(pi);
pi^:=i;
n:=Tree.Items.AddChildObject(r,s,pi);
n.ImageIndex:=ii;
n.SelectedIndex:=n.ImageIndex;
if Assigned(r) then
r.Expand(False);
r:=n;
if (USBClass in [usbReserved..usbStorage,usbVendorSpec,usbError]) and (USBDevice.ConnectionStatus=1) then begin
for j:=0 to High(USBDevice.Registry) do begin
g:=USBDevice.Registry[j].DeviceClassGUID;
SetupDiGetClassImageIndex(spid,g,ii);
new(pi);
pi^:=MakeWord(j,i+1);
n:=Tree.Items.AddChildObject(r,USBDevice.Registry[j].DeviceClass,pi);
n.ImageIndex:=ii;
n.SelectedIndex:=n.ImageIndex;
if (USBDevice.Registry[j].Drive<>'') and USBDevice.Registry[j].DriveConnected then begin
new(pi);
pi^:=MakeWord(j,i+1);
c:=Tree.Items.AddChildObject(n,Format('Drive: %s:',[USBDevice.Registry[j].Drive]),pi);
g:=GUID_DEVCLASS_VOLUME;
SetupDiGetClassImageIndex(spid,g,ii);
c.ImageIndex:=ii;
c.SelectedIndex:=c.ImageIndex;
end;
end;
end;
end;
finally
Tree.Items.EndUpdate;
end;
end;
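The FindNode helper called in RefreshData is not part of the listing above. A plausible minimal reconstruction (an assumption on our part; the demo's actual implementation may differ) locates the tree node whose Data pointer holds the given USB node index:

```delphi
function TForm1.FindNode(AIndex: Integer): TTreeNode;
var
  i: Integer;
begin
  Result := nil;
  if AIndex < 0 then
    Exit; // root-level USB nodes have no parent node
  for i := 0 to Tree.Items.Count - 1 do
    if Assigned(Tree.Items[i].Data) and
       (PInteger(Tree.Items[i].Data)^ = AIndex) then
      Exit(Tree.Items[i]);
end;
```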
Display the selected USB Device Node properties as shown below.
procedure TForm1.DisplayProps(AIndex: integer);
procedure AddItem(const AProperty,AValue: string);
begin
with List.Items.Add do begin
Caption:=AProperty;
SubItems.Add(AValue);
end;
end;
var
s: string;
begin
List.Items.BeginUpdate;
try
List.Items.Clear;
if AIndex<256 then begin
with USB.USBNodes[AIndex] do
if (USBDevice.ConnectionStatus=1) then begin
if USBDevice.USBClassname='' then
AddItem('Class',ClassNames[Integer(USBClass)])
else
AddItem('Class',USBDevice.USBClassName);
AddItem('Manufacturer',USBDevice.Manufacturer);
if (USBClass in [usbReserved..usbStorage,usbVendorSpec,usbError]) then begin
AddItem('ClassGUID',GUIDToString(ClassGUID));
AddItem('Connection Name',ConnectionName);
AddItem('Serial',USBDevice.Serial);
AddItem('Power consumption',Format('%d mA',[USBDevice.MaxPower]));
case USB.GetDevicePowerState(DeviceInstanceId,Keyname) of
PowerDeviceUnspecified: s:='Unspecified';
PowerDeviceD0: s:='D0';
PowerDeviceD1: s:='D1';
PowerDeviceD2: s:='D2';
PowerDeviceD3: s:='D3';
end;
AddItem('Power state',s);
AddItem('Specification version',Format('%d.%d',[USBDevice.MajorVersion,USBDevice.MinorVersion]));
AddItem('Driver key',Keyname);
AddItem('Last init',DateTimeToStr(TimeStamp));
end;
end;
end else
with USB.USBNodes[Hi(AIndex)-1] do begin
AddItem('Class',USBDevice.Registry[Lo(AIndex)].DeviceClass);
AddItem('Name',USBDevice.Registry[Lo(AIndex)].Name);
AddItem('ClassGUID',GUIDToString(USBDevice.Registry[Lo(AIndex)].DeviceClassGUID));
AddItem('Last init',DateTimeToStr(USBDevice.Registry[Lo(AIndex)].Timestamp));
end;
finally
List.Items.EndUpdate;
end;
end;
On clicking the Remove button, check whether the selected USB device node is ejectable using the IsEjectable method, and eject the device.
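A minimal sketch of such a handler, assuming the selected tree node's Data pointer carries the USB node index (as set up in RefreshData) and that IsEjectable and Eject take that index; the exact signatures in the suite may differ, so treat this as illustrative:

```delphi
procedure TForm1.bEjectClick(Sender: TObject);
var
  idx: Integer;
begin
  if not Assigned(Tree.Selected) or not Assigned(Tree.Selected.Data) then
    Exit;
  idx := PInteger(Tree.Selected.Data)^;
  if idx < 256 then                // top-level USB node (see DisplayProps)
    if USB.IsEjectable(idx) then   // signature assumed; check the suite's docs
    begin
      USB.Eject(idx);              // signature assumed as well
      RefreshData;                 // rebuild the tree after ejecting
    end;
end;
```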
Use the DeviceMonitor events OnDeviceConnect, OnDeviceDisconnect, OnVolumeConnect, and OnVolumeDisconnect to detect device arrival and removal, and refresh the tree view accordingly.
procedure TForm1.DeviceMonitorDeviceConnect(Sender: TObject;
DeviceDesc: TDeviceDesc);
begin
if cbxAuto.Checked and SameText(DeviceDesc.GUID,GUIDToString(GUID_DEVINTERFACE_USB_DEVICE)) then
Refreshdata;
end;
procedure TForm1.DeviceMonitorVolumeConnect(Sender: TObject; Drive: Char;
Remote: Boolean);
begin
if cbxAuto.Checked then
Refreshdata;
end;
MiTeC USBEnum Demo
It's that simple to enumerate the connected USB devices and list their properties in your application. Use this MiTeC component suite and get the job done quickly.
Mobile app development can be divided into the creation of three main types of application: native apps, web-based mobile apps, and hybrid apps.
What are the differences between native, hybrid, and web applications?
The simplest way to describe the difference between native, hybrid, and web apps is:
Native apps are compiled binaries that run on the device. They are the fastest and most secure of the three options.
Web applications are hosted and run in the browser and require an internet connection to work. Web apps are the slowest option, with the least access to device features.
Hybrid applications (as the name suggests) are a bit of both, part native and part web app, and sit in the middle in terms of speed.
As of March 2021, Android (71.9%) and iOS (27.33%) dominate the market. This means that if you want to build a mobile user experience, applications targeting Android and iOS will give you almost complete market coverage (99.2%).
What are native mobile applications?
Native applications are typically written and compiled for each platform they run on. They deliver the fastest performance and the highest levels of security, since they are compiled and optimized for the hardware. With full access to the hardware, they also benefit from complete access to device features such as biometrics, camera, sensors, and so on. Because native apps use system UI elements, they fit in with the platform's user experience and achieve the highest adoption rate when rolled out, since they are more intuitive to use. This is backed up by the fact that native applications dominate the leaderboard of every app store on the market.
While the native application approach provides the best performance, speed, and usability, vendor tools such as Xcode (from Apple, for iOS) and Android Studio (for Android) each target only a single platform. This can make development cycles longer, more complicated, and ultimately (seemingly) more expensive up front, due to multiple code bases, QA cycles, skills that must be kept up to date, and so on. Even so, there are many reasons to go native, and many companies have rewritten hybrid apps as native versions after poor user feedback.
The initial setup of native apps (and some hybrid apps as well) can take longer, especially when deploying through app stores. Once set up, however, they are relatively quick to update (though not as fast as a web app that bypasses app stores).
The only option on the market that offers fully compiled, true-native applications from a single source is Delphi. On the market for over 9 years since its introduction, FireMonkey (FMX) has matured into a highly flexible framework, built on modern object-oriented and component-based programming, to deliver a low-code RAD approach to mobile development, targeting not just Android and iOS but also macOS, Linux, and Windows.
What are hybrid applications?
Hybrid applications, such as those from Sencha, Angular Mobile, React Native, Cordova, Ionic, and PhoneGap, are built using web technologies (HTML5, CSS, JavaScript) hosted inside a native application shell. In essence, they are web apps running locally on the phone inside a micro web server.
One benefit of hybrid app development is that a single-source code base can target multiple platforms. In addition, the native shell can extend the HTML language to reach some parts of the phone's hardware, although this is limited compared to what a native application can achieve. Hybrid apps can also be set up to run offline without a live connection (if configured that way).
The main downside of hybrid applications is that they can still look and behave like a web page. For example, controls can accidentally be multi-selected in the UI. They are also well known for poor memory and processor optimization, which makes them resource-intensive.
Security is also an important point to consider, since the source code typically ships as plain text inside the application packages. This makes security compliance much harder to control when working with hybrid apps. Malicious code injection is a real concern.
Of all these options, React Native comes closest to a native application, offering access to some native controls. It has a good community and is backed by Facebook and others. However, Facebook is often cited as still using pure native application code in places to work around feature limitations. React Native is also not recommended for apps where security really matters (such as financial apps).
Hybrid applications are also often found in many low-code solutions such as Lansa, Mendix, Microsoft PowerApps, and Appian, where the approach is integrated with additional backend systems. While these platforms can be impressive for initial speed to market, there are still restrictions on what can be achieved, and they are marked by higher running costs due to the per-user pricing they come with.
What are web applications?
Web apps can also be a useful way to deliver content to mobile phones. While web apps are not installed (and need a live data connection), they offer the chance to quickly change and update what the user can see and do. Web apps run through the browser, so the main computing power runs remotely, which means the mobile device needs minimal power and memory to run the web app.
HTML5 has some strong capabilities, including local data storage, which can make limited caching of data possible. However, this is not exactly where you want to store sensitive data!
A major benefit of a web application is that you get almost 100% market coverage, even on exceptionally niche mobile platforms.
What is the best option for mobile app development?
The answer really depends on your requirements!
If you need to achieve the best possible level of security, performance, and usability in an application, and you value the flexibility to build whatever you need at any time, then native apps are the way to go! The best choice for native mobile app development is Delphi, thanks to its compiled single-source-code approach.
If you only need limited access to mobile device features and data security is not a major concern, hybrid apps are a valid approach.
If you just need to reach multiple platforms quickly (and don't need access to mobile device features) and security is not a concern, web apps have a lot to offer. A good option for rapid web app development is Sencha Architect (which is also included as an additional tool in the Delphi Architect edition, offering a wider choice between web and native app development).
Of all the options, the best of true-native (speed, performance, and device access) and cross-platform support in a single code base (helping to manage long-term costs) is found only in Delphi. While Delphi may be seen as a niche product compared to some other platforms, developers (especially those familiar with C#) can skill up on the framework at a fraction of the cost of running multiple development projects and keeping multiple skill sets up to date. With more than 26 years on the market and more than 9 years of delivering a unique cross-platform approach, it is arguably years ahead of others in key areas.
El desarrollo de aplicaciones móviles se puede clasificar en la creación de 3 tipos principales de aplicaciones. Aplicaciones nativas, aplicaciones móviles basadas en la web y aplicaciones híbridas.
¿Cuáles son las diferencias entre las aplicaciones nativas, híbridas y web?
La forma más sencilla de describir la diferencia entre aplicaciones nativas, híbridas y web es:
Las aplicaciones nativas son binarios compilados que se ejecutan en el dispositivo. Son las más rápidas y seguras de las tres opciones.
Las aplicaciones web se alojan y se ejecutan en el navegador y requieren una conexión a Internet para funcionar. Las aplicaciones web son la opción más lenta, con el menor acceso a las funciones del dispositivo.
Las aplicaciones híbridas (como su nombre indica) son un poco de ambas, parte nativa y parte aplicación web, y se ubican en el medio en términos de velocidad.
A marzo de 2021, Android (71,9%) e iOS (27,33%) dominan el mercado. Esto significa que si está buscando crear una experiencia de usuario móvil / las aplicaciones destinadas a Android e iOS proporcionarán una cobertura de mercado casi completa (99,2%).
¿Qué son las aplicaciones móviles nativas?
Las aplicaciones nativas generalmente se escriben y compilan para cada plataforma en la que se ejecutan. Proporcionan el rendimiento más rápido y los niveles más altos de seguridad, ya que se compilan y optimizan para el hardware. Con acceso completo al hardware, también se benefician del acceso completo a las funciones del dispositivo, como datos biométricos, cámara, sensores, etc. Dado que las aplicaciones nativas usan elementos de la interfaz de usuario del sistema, “encajan” con la experiencia del usuario de la plataforma, logrando la mayor adopción. calificar cuando se implementan, ya que son más intuitivos de usar. Este hecho está respaldado por el hecho de que las aplicaciones nativas dominan la clasificación de cada tienda de aplicaciones del mercado.
Si bien el enfoque de aplicación nativa proporciona el mejor rendimiento, velocidad y usabilidad, las herramientas de proveedores como Xcode (de Apple para iOS) y Android Studio (para Android) solo tienen como objetivo una única plataforma. Esto puede hacer que los ciclos de desarrollo sean más largos, más complicados y, en última instancia (aparentemente) más costosos por adelantado, debido a múltiples bases de código, ciclos de preguntas y respuestas, habilidades para mantenerse actualizado, etc. Dicho esto, como se mencionó anteriormente, hay muchas razones para elegir este nativo, y muchas empresas han reescrito aplicaciones híbridas en versiones nativas debido a los comentarios deficientes de los usuarios.
Las aplicaciones nativas (y también algunas aplicaciones híbridas) pueden tardar más en configurarse inicialmente, especialmente si se implementan a través de tiendas de aplicaciones, sin embargo, una vez configuradas, se actualizan relativamente rápido (pero no tan rápido como una aplicación web sin tiendas de aplicaciones).
La única opción en el mercado que ofrece aplicaciones nativas de origen único y completamente compiladas es Delphi . Con más de 9 años en el mercado desde su lanzamiento, FireMonkey (FMX) ha madurado en un marco altamente flexible, construido sobre programación moderna orientada a objetos y basada en componentes, para lograr un enfoque RAD de código bajo para el desarrollo móvil, no solo focalización Android e iOS, pero también macOS, Linux y Windows.
¿Qué son las aplicaciones híbridas?
Las aplicaciones híbridas, como las de Sencha, Angular Mobile, React Native, Cordova, Ionic, PhoneGap, se crean utilizando tecnologías web (HTML5, CSS, JavaScript), alojadas dentro de un shell de aplicación nativo. En esencia, son aplicaciones web que se ejecutan localmente en el teléfono dentro de un micro servidor web.
Una ventaja del desarrollo de aplicaciones híbridas es una base de código de fuente única que puede apuntar a múltiples plataformas. Además, el shell nativo puede permitir que la extensión del lenguaje HTML llegue a algunas partes del hardware del teléfono; sin embargo, esto es limitado en comparación con lo que puede lograr una aplicación nativa. Las aplicaciones híbridas también se pueden configurar para que se ejecuten sin conexión sin una conexión en vivo (si se configura de esa manera).
La principal desventaja de las aplicaciones híbridas es que aún pueden verse y comportarse como una página web. Por ejemplo, los controles pueden realizar una selección múltiple accidentalmente en la interfaz de usuario por accidente. También son bien conocidos por la mala optimización de la memoria y el procesador, lo que los hace intensivos en recursos.
La seguridad también es un punto importante a considerar, ya que el código fuente suele estar en texto sin cifrar dentro de los paquetes de aplicaciones. Esto hace que el cumplimiento de la seguridad cuando se trabaja con aplicaciones híbridas sea mucho más difícil de controlar. La inyección de código malicioso es una preocupación real.
De todas estas opciones, la más cercana a una aplicación nativa es React Native, que logra ofrecer acceso para usar algunos controles nativos. Tiene una buena comunidad y está respaldado por Facebook y otros. Sin embargo, a menudo se dice que Facebook todavía usa código de aplicación nativo puro en lugares para evitar las limitaciones de las funciones. React Native tampoco se recomienda para aplicaciones donde la seguridad es realmente importante (como aplicaciones financieras).
Las aplicaciones híbridas también se encuentran a menudo en muchas soluciones de código bajo como Lansa , Mendix , Microsoft PowerApps y Appian , donde el enfoque tiene integración con sistemas backend adicionales. Si bien estas plataformas pueden ser impresionantes por la velocidad inicial de comercialización, todavía existen restricciones sobre lo que se puede lograr y se clasifican por costos de funcionamiento más altos debido al precio por usuario con el que vienen.
¿Qué son las aplicaciones web ?
Las aplicaciones web también pueden ser una forma útil de entregar contenido a dispositivos móviles. Si bien las aplicaciones web no están instaladas (y deben tener una conexión de datos en vivo), ofrecen la oportunidad de cambiar y actualizar rápidamente lo que el usuario puede ver y hacer. Las aplicaciones web se ejecutan a través del navegador, por lo que la potencia informática principal se ejecuta de forma remota, lo que significa que el dispositivo móvil necesita un mínimo de energía y memoria para ejecutar la aplicación web.
HTML5 tiene algunas capacidades sólidas, incluido el almacenamiento de datos local, que pueden hacer posible el almacenamiento en caché limitado de datos, sin embargo, ¡aquí no es exactamente donde desea almacenar datos confidenciales!
Uno de los principales beneficios de una aplicación web es que puede obtener casi el 100% de cobertura del mercado, incluso en plataformas móviles excepcionalmente específicas.
¿Cuál es la mejor opción para el desarrollo de aplicaciones móviles?
¡La respuesta depende realmente de los requisitos que tenga!
Si necesita alcanzar el mejor nivel de seguridad, rendimiento y usabilidad en una aplicación, y valora la flexibilidad para crear lo que necesite en cualquier momento, ¡las aplicaciones nativas son el camino a seguir! La mejor opción para el desarrollo de aplicaciones nativas para dispositivos móviles es Delphi debido a su enfoque de código fuente único compilado.
Si necesita acceso limitado a las funciones de los dispositivos móviles y la seguridad de los datos no es una preocupación importante, las aplicaciones híbridas son un enfoque válido.
Si solo necesita acceder a varias plataformas rápidamente (y no necesita acceso a las funciones del dispositivo móvil) y la seguridad no es una preocupación, las aplicaciones web tienen el potencial de ofrecer mucho. Una buena opción para desarrollar rápidamente aplicaciones web es Sencha Architect (que también se incluye como una herramienta adicional en la edición Delphi Architect , que ofrece una opción más amplia entre el desarrollo de aplicaciones web y nativas.
De todas las opciones, lo mejor de la compatibilidad nativa (velocidad, rendimiento y acceso a dispositivos) y multiplataforma en una única base de código (que ayuda a administrar los costos a largo plazo) solo se ve en Delphi . Si bien Delphi puede verse como un producto de nicho en comparación con algunas otras plataformas, los desarrolladores (especialmente aquellos familiarizados con C #) pueden actualizar fácilmente el marco a una fracción del costo de ejecutar múltiples proyectos de desarrollo y mantener múltiples conjuntos de habilidades actualizados. . Y con más de 26 años de experiencia en el mercado y más de 9 años ofreciendo un enfoque multiplataforma único, podría decirse que está años por delante de otros en áreas.
O desenvolvimento de aplicativos móveis pode ser categorizado na criação de 3 tipos principais de aplicativos. Aplicativos nativos, aplicativos móveis baseados na web e aplicativos híbridos.
Quais são as diferenças entre aplicativos nativos, híbridos e da web?
A maneira mais simples de descrever a diferença entre aplicativos nativos, híbridos e da web é:
Os aplicativos nativos são binários compilados que são executados no dispositivo. Eles são os mais rápidos e seguros das três opções.
Os aplicativos da Web são hospedados e executados no navegador e requerem uma conexão com a Internet para funcionar. Os aplicativos da Web são a opção mais lenta, com menos acesso aos recursos do dispositivo.
Os aplicativos híbridos (como o nome sugere) são um pouco das duas coisas – parte nativa e parte aplicativo web, e ficam no meio em termos de velocidade.
Em março de 2021, Android (71,9%) e iOS (27,33%) dominavam o mercado. Isso significa que se você está procurando criar uma experiência de usuário / aplicativos móveis voltados para Android e iOS, fornecerá uma cobertura de mercado quase total (99,2%).
O que são aplicativos móveis nativos?
Os aplicativos nativos são normalmente escritos e compilados para cada plataforma em que são executados. Eles fornecem o desempenho mais rápido e os mais altos níveis de segurança à medida que são compilados e otimizados para o hardware. Com acesso total ao hardware, eles também se beneficiam de acesso completo aos recursos do dispositivo, como biometria, câmera, sensores, etc. Como os aplicativos nativos usam elementos de IU do sistema, eles se “encaixam” na experiência do usuário da plataforma, alcançando a maior adoção avalie quando implementados, pois são mais intuitivos de usar. Esse fato é apoiado pelo fato de que os aplicativos nativos dominam a tabela de classificação de cada loja de aplicativos no mercado.
Embora a abordagem de aplicativo nativo forneça o melhor desempenho, velocidade e usabilidade, ferramentas de fornecedores como Xcode (da Apple para iOS) e Android Studio (para Android) visam apenas uma única plataforma. Isso pode tornar os ciclos de desenvolvimento mais longos, mais complicados e, em última instância (aparentemente) mais caros no início, devido a várias bases de código, ciclos de perguntas e respostas, habilidades para se manter atualizado, etc. Dito isso, conforme listado acima, há muitos motivos para escolher este nativo, e muitas empresas reescreveram aplicativos híbridos em versões nativas após comentários de usuários insatisfatórios.
Aplicativos nativos (e também alguns aplicativos híbridos) podem demorar mais para configurar inicialmente, especialmente se forem implantados por meio de lojas de aplicativos, no entanto, uma vez configurados, eles são relativamente rápidos de atualizar (mas não tão rápidos quanto um aplicativo da web sem lojas de aplicativos).
A única opção no mercado que oferece aplicativos nativos verdadeiros totalmente compilados de fonte única é o Delphi . Com mais de 9 anos no mercado desde o seu lançamento, FireMonkey (FMX) amadureceu em uma estrutura altamente flexível, construída em programação moderna orientada a objetos e baseada em componentes, para alcançar uma abordagem RAD de baixo código para desenvolvimento móvel, não apenas visando Android e iOS, mas também macOS, Linux e Windows.
O que são aplicativos híbridos?
Aplicativos híbridos, como os de Sencha, Angular Mobile, React Native, Cordova, Ionic, PhoneGap, são construídos usando tecnologias da web (HTML5, CSS, JavaScript), hospedados dentro de um shell de aplicativo nativo. Em essência, eles são aplicativos da web executados localmente no telefone dentro de um micro servidor da web.
Um benefício do desenvolvimento de aplicativos híbridos é que uma base de código de fonte única pode ter como alvo várias plataformas. Além disso, o shell nativo pode permitir que a extensão da linguagem HTML alcance algumas partes do hardware do telefone – no entanto, isso é limitado em comparação ao que um aplicativo nativo pode alcançar. Os aplicativos híbridos também podem ser configurados para serem executados offline sem uma conexão ativa (se configurados dessa forma).
A principal desvantagem dos aplicativos híbridos é que eles ainda podem ter a aparência e o comportamento de uma página da web. Por exemplo, os controles podem acidentalmente obter seleção múltipla na IU por acidente. Eles também são conhecidos por sua memória insuficiente e otimização do processador, o que os torna intensivos em recursos.
A segurança também é um ponto importante a ser considerado, já que o código-fonte normalmente está em texto não criptografado dentro dos pacotes do aplicativo. Isso torna a conformidade de segurança ao trabalhar com aplicativos híbridos muito mais difícil de controlar. A injeção de código malicioso é uma preocupação real.
De todas essas opções, a mais próxima de um aplicativo nativo é React Native, que consegue oferecer acesso para usar alguns controles nativos. Tem uma boa comunidade e é apoiado pelo Facebook e outros. No entanto, o Facebook é frequentemente citado como ainda usando código de aplicativo nativo puro em locais para contornar as limitações de recursos. React Native também não é recomendado para aplicativos onde a segurança é realmente importante (como aplicativos financeiros).
Os aplicativos híbridos também são frequentemente encontrados em muitas soluções de baixo código, como Lansa , Mendix , Microsoft PowerApps e Appian , onde a abordagem tem integração com sistemas de back-end adicionais. Embora essas plataformas possam ser impressionantes para a velocidade inicial de lançamento no mercado, ainda há restrições ao que pode ser alcançado e são categorizadas por custos de operação mais altos devido ao preço por usuário que vêm com.
O que são aplicativos da web ?
Os aplicativos da web também podem ser uma forma útil de fornecer conteúdo para celulares. Embora os aplicativos da Web não estejam instalados (e precisam ter uma conexão de dados ativa), eles oferecem uma chance de mudar e atualizar rapidamente o que o usuário pode ver e fazer. Os aplicativos da web são executados por meio do navegador, de modo que o poder de computação principal é executado remotamente, o que significa que o dispositivo móvel precisa de energia e memória mínimas para executar o aplicativo da web.
O HTML5 tem alguns recursos fortes, incluindo armazenamento de dados local, que pode tornar o armazenamento de dados limitado possível, no entanto, não é exatamente aqui que você deseja armazenar dados confidenciais!
Um grande benefício de um aplicativo da web é que você pode obter quase 100% de cobertura de mercado, inclusive em plataformas móveis de nicho excepcional.
Qual é a melhor opção para o desenvolvimento de aplicativos móveis?
A resposta depende realmente dos requisitos que você tem!
Se você precisa atingir o melhor nível de segurança, desempenho e usabilidade em um aplicativo, e valoriza a flexibilidade para construir o que você precisa a qualquer momento, então os aplicativos nativos são o caminho a percorrer! A melhor escolha para o desenvolvimento de aplicativos nativos para dispositivos móveis é Delphi devido à sua abordagem de código-fonte único compilado.
Se você precisa de acesso limitado aos recursos do dispositivo móvel e a segurança dos dados não é uma grande preocupação, os aplicativos híbridos são uma abordagem válida.
Se você só precisa chegar a várias plataformas rapidamente (e não precisa de acesso aos recursos do dispositivo móvel) e a segurança não é uma preocupação, os aplicativos da web têm o potencial de oferecer muito. Uma boa opção para desenvolver aplicativos web rapidamente é o Sencha Architect (que também está incluído como uma ferramenta adicional na edição Delphi Architect , oferecendo uma escolha mais ampla entre o desenvolvimento de aplicativos web e nativos.
De todas as opções, o melhor do suporte nativo verdadeiro (velocidade, desempenho e acesso ao dispositivo) e plataforma cruzada em uma única base de código (ajudando a gerenciar custos de longo prazo) só é visto no Delphi . Embora o Delphi possa ser visto como um produto de nicho em comparação com algumas outras plataformas, os desenvolvedores (especialmente aqueles familiarizados com C #) são facilmente qualificados para o framework por uma fração do custo de executar vários projetos de desenvolvimento e manter vários conjuntos de habilidades atualizados . E com mais de 26 anos de experiência no mercado e mais de 9 anos oferecendo uma abordagem multiplataforma exclusiva, ele está, sem dúvida, anos à frente de outros em áreas.
Mobile app development can be divided into the creation of three main types of applications: native apps, web-based mobile apps, and hybrid apps.
What are the differences between native, hybrid, and web applications?
The simplest way to describe the difference between native, hybrid, and web apps is:
Native apps are compiled binaries that run on the device. They are the fastest and most secure of the three options.
Web Applications are hosted and run in the browser and require a connection to the internet to work. Web Apps are the slowest option, with the least access to the device features.
Hybrid applications (as the name suggests) are a bit of both – part native, and part web app, and fall in the middle in terms of speed.
As of March 2021, Android (71.9%) and iOS (27.33%) dominate the market. This means that if you are looking to create a mobile user experience, applications targeting Android and iOS will provide near-full market coverage (99.2%).
What are native mobile applications?
Native applications are typically written and compiled for each platform that they run on. They provide the fastest performance and the highest levels of security, as they are compiled and optimized for the hardware. With full access to the hardware, they also benefit from complete access to device features, such as biometrics, camera, sensors, etc. Because native apps use system UI elements, they “fit in” with the platform user experience and achieve the highest adoption rates when rolled out, as they are more intuitive to use. This is borne out by native applications dominating the leaderboard of every app store on the market.
While the native application approach provides the best performance, speed, and usability, vendor tools such as Xcode (from Apple, for iOS) and Android Studio (for Android) target only a single platform. This can make development cycles longer, more complicated, and ultimately (seemingly) more expensive upfront, due to multiple codebases, QA cycles, skills to keep updated, and so on. That said, as outlined above, there are many reasons to choose the native approach, and many companies have rewritten hybrid apps as native versions following poor user feedback.
Native apps (and also some hybrid apps) can take longer to set up initially, especially if deploying via app stores; however, once set up, they are relatively quick to update (though not as quick as a web app that bypasses app stores).
The one option on the market that does offer single-source, fully compiled, true-native applications is Delphi. With over 9 years in the market since its launch, FireMonkey (FMX) has matured into a highly flexible framework, built on modern object-oriented and component-based programming, that achieves a low-code RAD approach to mobile development, targeting not only Android and iOS but also macOS, Linux, and Windows.
What are hybrid applications?
Hybrid applications, such as those from Sencha, Angular Mobile, React Native, Cordova, Ionic, PhoneGap, are built using web technologies (HTML5, CSS, JavaScript), hosted inside a native application shell. In essence, they are web apps running locally on the phone inside a micro web server.
A benefit of hybrid application development is that a single-source codebase can target multiple platforms. Additionally, the native shell can extend the HTML environment to reach some parts of the phone’s hardware, although this is limited compared to what a native application can achieve. Hybrid applications can also be set up to run offline without a live connection (if configured that way).
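To make the shell-to-hardware hand-off concrete, here is a minimal sketch of the pattern a hybrid shell typically uses: the shell injects a bridge object into the hosted web page, and the JavaScript side calls it to reach device features. The `nativeBridge` object and its `invoke` method are hypothetical illustrations, not the API of any real framework, and the fallback stub exists only so the sketch runs outside a shell.

```javascript
// Hypothetical bridge pattern: a hybrid shell injects `window.nativeBridge`;
// the web code calls it to cross the JS-to-native boundary.
const nativeBridge =
  (typeof window !== "undefined" && window.nativeBridge) || {
    // Fallback stub so the sketch also runs outside a shell (e.g. Node.js).
    invoke: (feature) => Promise.resolve({ feature, level: 0.8 }),
  };

async function readBattery() {
  // In a real shell, this call is marshalled to platform code, which
  // resolves the promise with the device's answer.
  const result = await nativeBridge.invoke("battery");
  return result.level;
}

readBattery().then((level) => console.log(`battery: ${level}`));
```

The key limitation the article describes follows directly from this design: the web code can only reach whatever the shell chooses to expose through the bridge, unlike a native app with direct hardware access.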
The main downside of hybrid applications is that they can still look and behave like a web page. For example, controls can accidentally end up multi-selected in the UI. They are also well known for poor memory and processor optimization, making them resource-intensive.
Security is also a major point to consider, as the source code is typically in clear text inside the application bundles. This makes security compliance when working with Hybrid apps a lot harder to keep on top of. Malicious code injection is a real worry.
Of all these options, the closest to a native application is React Native, which offers access to some native controls. It has a good community and is backed by Facebook and others. However, Facebook is often quoted as still using pure native application code in places to work around feature limitations. React Native is also not recommended for apps where security is really important (such as financial apps).
Hybrid applications are also often found in many low-code solutions such as LANSA, Mendix, Microsoft PowerApps, and Appian, where the approach is integrated with additional backend systems. While these platforms can be impressive for initial speed to market, there are still restrictions on what can be achieved, and they fall into a higher running-cost category due to the per-user pricing they come with.
What are web applications?
Web apps can also be a useful way to deliver content to mobile devices. While web apps are not installed (and do need a live data connection), they offer a chance to rapidly change and update what the user can see and do. Web apps run in the browser, so the main computing work happens remotely, meaning the mobile device needs minimal power and memory to run the web app.
HTML5 has some strong capabilities, including local data storage, that can make limited caching of data possible; however, this is not where you want to store sensitive data!
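The limited caching mentioned above can be sketched with the Web Storage API: entries are stored with an expiry timestamp and treated as missing once stale. This is an illustrative sketch, not a recommendation for sensitive data; the Map-based shim only exists so it also runs outside a browser (e.g. under Node.js), where `localStorage` is unavailable.

```javascript
// Pick the browser's localStorage if present, else an in-memory shim.
const store =
  (typeof window !== "undefined" && window.localStorage) ||
  (() => {
    const m = new Map();
    return {
      getItem: (k) => (m.has(k) ? m.get(k) : null),
      setItem: (k, v) => m.set(k, String(v)),
      removeItem: (k) => m.delete(k),
    };
  })();

// Cache a value with a time-to-live, serialized as JSON.
function cacheSet(key, value, ttlMs) {
  store.setItem(key, JSON.stringify({ value, expires: Date.now() + ttlMs }));
}

// Return the cached value, or null if absent or expired.
function cacheGet(key) {
  const raw = store.getItem(key);
  if (raw === null) return null;
  const entry = JSON.parse(raw);
  if (Date.now() > entry.expires) {
    store.removeItem(key);
    return null;
  }
  return entry.value;
}

cacheSet("profile", { name: "Ada" }, 60_000);
console.log(cacheGet("profile")); // { name: 'Ada' }
```

Note that anything placed in `localStorage` is plain text readable by any script on the page, which is exactly why the article warns against putting sensitive data there.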
One major benefit of a web application is that you can get almost 100% market coverage, including on exceptionally niche mobile platforms.
What is the best option for mobile app development?
The answer is really down to the requirements you have!
If you need to achieve the best level of security, performance, and usability in an application, and value the flexibility to build whatever you need at any time, then native apps are the way to go! The best choice for native app development for mobile is Delphi due to its compiled single source code approach.
If you need limited access to mobile device features and data security isn’t a major concern, then hybrid apps are a valid approach.
If you just need to get to multiple platforms quickly (and don’t need access to mobile device features) and security isn’t a worry, then web apps have the potential to offer a lot. A good option for rapidly developing web apps is Sencha Architect (which is also included as an additional tool in the Delphi Architect edition, offering a wider choice between web and native app development).
Of all the options, the combination of true-native support (speed, performance, and device access) and cross-platform support in a single code base (helping manage long-term costs) is only found in Delphi. While Delphi may be seen as a niche product compared to some other platforms, developers (especially those familiar with C#) are easily up-skilled to the framework at a fraction of the cost of running multiple development projects and keeping multiple skill sets up to date. And with 26+ years of market experience, and 9+ years offering a unique multi-platform approach, it’s arguably years ahead of others in some areas.
A trial version of 10.4.2 is already available, and new product purchases will include access to the 10.4.2 download. Existing customers with an active update subscription can use RAD Studio 10.4.2 with their existing licenses. 10.4.2 can be downloaded from the new customer portal (my.embarcadero.com).
We have just finished our first webinar on the RAD Studio 10.4.2 release; more than 1,000 people participated, and questions continued for hours across the three sessions. The edited video of the three sessions runs over four hours once all the questions are included. The response to the 10.4.2 release, in which the product management team accomplished a great deal, has been enthusiastic. With this release, the major version 10.4 Sydney has reached a mature stage, and many users are upgrading. We want customers to be happy, but with a product as diverse as RAD Studio that can sometimes be difficult. We can never address every quality issue, but this release addresses more than 600 issues (more than 10.4.1), and we believe we are moving in the right direction. A number of promotions for 10.4.2 are currently running, so this is a great time to buy.
CodeSite is an advanced debugging and application logging system that gives developers deeper insight into how their code is executing. One benefit of this is that you can locate problems more quickly and more easily ensure your application is running correctly.
CodeSite supports both live logging and file logging, and can log locally or remotely depending on the configuration.
Obtaining information is at the core of CodeSite. Developers identify the data they would like to capture and then call a relevant method on a CodeSite logger instance. You can also inspect object properties, XML data, image files, and datasets.
Key Features:
Logging classes provide extensive methods for capturing all kinds of information
Does not interrupt application flow
Easily log complex data structures
Group logging information by user-defined categories
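CodeSite itself is a commercial Delphi/C++Builder product, so the following is only a generic sketch of two of the ideas listed above (logging complex data structures, and grouping messages by user-defined categories); none of these names are CodeSite's real API.

```javascript
// Generic categorized logger sketch (illustrative only, not CodeSite's API).
class CategoryLogger {
  constructor(sink = console.log) {
    this.sink = sink;   // where rendered lines go (console by default)
    this.lines = [];    // kept so tools/tests can inspect what was logged
  }

  send(category, label, value) {
    // Complex values are serialized so nested structures stay readable.
    const rendered =
      typeof value === "object" ? JSON.stringify(value) : String(value);
    const line = `[${category}] ${label}: ${rendered}`;
    this.lines.push(line);
    this.sink(line);
  }
}

const log = new CategoryLogger();
log.send("net", "request", { url: "/api", method: "GET" });
log.send("ui", "clicks", 3);
```

The category prefix is what makes a viewer able to filter or group messages, which is the same idea behind CodeSite's user-defined categories.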
IPWorks IPC is one of the most powerful peer-to-peer communication solutions for Delphi and C++Builder developers. It is a suite of components for inter-process communication (IPC) over named pipes. The component set combines client, server, and remote execution components, facilitating straightforward peer-to-peer communication between related or unrelated processes.
The IPWorks IPC component set provides powerful client, server, and external process execution components for adding inter-process communication to desktop and web applications. The components deliver 100% native performance and are thread-safe in critical situations.
Installing it gives you 3 main components:
PipeClient – A simple Client for connecting and communicating over named pipes
PipeExec – Provides an easy way to start and interact with a process over standard input, output, and error
PipeServer – A lightweight server component based on an asynchronous, event-driven architecture. It is designed to balance the load between connections by default
RAD Studio 10.4.2 supports silent, automated installation of Delphi, C++Builder, and RAD Studio without UI interaction, giving individual developers, large development team deployments, and IT professionals a fast, efficient installation option.
Silent installation can be enabled by passing additional, optional command-line parameters to the setup program. The silent installer lets developers install the core IDE along with the platforms and features available in their edition.
Pro-Workout is an app that helps you train better by applying the concept of “Time Under Tension.” Built with Delphi, the app lets you enter all the timing details a workout requires, no matter how complex. The developer explains:
TRichView is a set of native Delphi and C++Builder VCL components for displaying, editing, and printing complex rich text documents.
The TRichView component suite offers a wide range of functionality for creating advanced text editors; web, help, and book authoring applications; chats; and messengers. Moreover, if you need a high-quality hypertext user interface, the TRichView components can come in handy.
The best part is that TRichView is written completely in Delphi and does not require external files. Let’s explore some of the premium features you can get from the TRichView component suite.
These are exciting times! As vaccines advance and lockdowns ease, we are ready to get back to business. Like many of you, we never stopped. Our teams’ hard work culminated in the release of 10.4.2 while our many initiatives moved forward, as our collaboration with partners and the open-source community continues to grow. February is always special for us, as we celebrate Delphi’s birthday, and it keeps getting younger!
RAD Studio 10.4.2 Released!
We just completed our first webinar for the release of RAD Studio 10.4.2. We had over 1,000 participants, with hours of questions across the three sessions. After editing the three sessions, the replay runs more than four hours to cover all the questions! There is fantastic excitement around 10.4.2, which has energized our product management team to do more. With this release, the major version 10.4 Sydney reaches an excellent level of maturity, with the rate of upgrades increasing markedly. We love it when customers are happy, but with a product as diverse as RAD Studio this can sometimes be a challenge. We can never address every quality issue we would like to, but with over 600 issues addressed in this release (more than in 10.4.1), we are moving in the right direction. Right now there are many excellent promotions to make 10.4.2 more accessible, so this is the perfect time to buy.
Happy 26th, Delphi!
We always take time to celebrate Delphi’s birthday. I joked earlier that Delphi is getting younger, and in a way it is. Delphi’s user base gets younger every day as more and more young people discover and fall in love with Delphi. Your confidence should be high! You have probably noticed that the amount of content in the community has exploded (see our blog and DelphiFeeds). I want to highlight a few items that I love.
We partnered with some MVPs to create a great whitepaper, backed by a series of blog posts, comparing Delphi with other Windows solutions. Unsurprisingly, Delphi performed incredibly well. It is a great piece to share with skeptics, and they can reproduce the tests themselves.
The other fun item is the Showcases competition. It is always hard to get good case studies in the developer world, but looking at examples is so useful. We now have more than 200 showcases on the blog, and growing. This is a very organic effort, so I encourage you to submit your examples. Let’s inspire all these young developers and grow together.
More in GetIt Now
Last quarter we introduced an online version of GetIt, intended to be a directory of available plugins for RAD Studio. While Google is a great resource for finding these, a dedicated, easy-to-navigate site is also very useful, and we have many different plugins and libraries. For example, RAD Studio is a very capable tool in the gaming space, as you can see from the many gaming showcases, yet finding all the tools needed to build games is not easy. We continue to focus on making such resources easier to find, so much more will happen in this direction. We rely heavily on our partners and MVPs to keep improving the ecosystem, but we really count on all of you to help us too.
More Resources Dedicated to Learning, C++, and InterBase
About a year ago, we brought Kyle Wheeler on board as our new General Manager under the Embarcadero brand. His goals are to increase the focus on C++ and InterBase while helping synchronize efforts with Whole Tomato. If you are not familiar with Whole Tomato, it is another Idera Dev Tools business, providing Visual Assist, one of the most popular C++ plugins for Visual Studio. Kyle continues to expand his role and has taken on many of the learning efforts for Delphi, working closely with Jim McKeeth and many others. Kyle will soon publish his own post with updates on his efforts.
These are exciting times! As vaccines advance and lockdowns ease, we are ready to get back to business. Like many of you, we never stopped. The hard work of our teams culminated in the release of 10.4.2 while advancing our many initiatives, as our collaboration with partners and the open-source community continues to grow. February is always special for us as we celebrate Delphi’s birthday—it is getting younger and younger!
RAD Studio 10.4.2 Released!
We just completed our first webinar for the release of RAD Studio 10.4.2. We had over 1,000 participants with questions lasting for hours across the three sessions. After editing the three sessions down, the replay is over four hours long to cover all the questions! There is fantastic enthusiasm around 10.4.2 that has our Product Management team pumped up to do more. This release gets the major 10.4 Sydney version to an excellent level of maturity, with the rate of upgrades nicely increasing. We love when customers are happy, but with a diverse product like RAD Studio, this can sometimes be challenging. We can never address all the quality issues we want to, but with over 600 issues addressed in this release (more than in 10.4.1), we are moving in the right direction. Right now there are plenty of excellent promotions to make 10.4.2 more accessible, so this is a perfect time to buy.
Happy 26th Delphi!
We always take our time to celebrate Delphi’s birthday. I jokingly said earlier that Delphi is getting younger, and in a way it is. Delphi’s user base gets younger every day as more young people discover and fall in love with Delphi. Your confidence should be high! You have probably noticed that the amount of content in the community has exploded (see our blog and DelphiFeeds). I want to highlight a couple of items that I love.
We teamed up with some MVPs to build a great whitepaper, with a series of blog posts, that benchmarks Delphi against other Windows solutions. Not surprisingly, Delphi did incredibly well. It is a great piece to share with skeptics, and they can reproduce the tests on their own.
The other fun item is the Showcases Competition. It is always tough to get good case studies in the developer world, but looking at examples is so useful. We now have more than 200 showcases on the blog—and growing. This is a very organic effort, so I encourage you to submit your examples. Let’s inspire all these young developers and grow together.
More in GetIt Now
Last quarter we introduced an online version of GetIt that aims to be a directory for available plug-ins for RAD Studio. While Google is a great resource to find these, having a dedicated and easy-to-navigate site is also very useful, and we have many different plugins and libraries. For example, RAD Studio is a very powerful tool in the gaming space as you can see from the many gaming showcases, yet finding all the tools necessary for building games is not easy. We continue to focus on making such resources easier to find, so you will see a lot more happening in this direction. We are highly reliant on our partners and MVPs to continue to improve the ecosystem, but we really rely on all of you to help us too.
More Dedicated Resources to Learning, C++, and InterBase
About a year ago, we brought Kyle Wheeler on board as our new General Manager under the Embarcadero brand. His goals are to increase focus on C++ and InterBase while helping synchronize efforts with Whole Tomato. If you’re not familiar with Whole Tomato, it is another Idera Dev Tools business, providing Visual Assist, one of the most popular C++ plugins for Visual Studio. Kyle continues to expand his role and has taken on a lot of the learning efforts for Delphi, working closely with Jim McKeeth and many others. Kyle will soon publish his own post to provide updates on his efforts.
RAD Studio 10.4.2 supports silent, automated installation of the product without any UI interaction. Silent installation is available for both the offline and the online installer. The offline installation consists of a GOF file plus the setup executable; the online installation consists of a setup executable that downloads the required packages automatically in the background. The Windows SDK installation currently starts with a UI and does not respect the silent-install rule. Also, in both scenarios (silent and very silent), the silent installer still asks you to confirm the EULA: press Y (for Yes) at the very beginning of the process, and no further interaction is requested after this initial input.
Take a look at how you can configure silent installation with the command-line parameters below.
Installer Command-Line Parameters
Silent installation can be activated by passing additional optional command-line parameters to the setup program (if you pass no parameters, a regular installation is performed). The setup command-line parameters required for silent installation mode are the following:
/SILENT: Runs the installer in silent mode. The progress window is still displayed.
/VERYSILENT: Runs the installer in very silent mode. No windows are displayed.
/SUPPRESSMSGBOXES: Suppresses message boxes. Has an effect only when combined with /SILENT or /VERYSILENT.
/NOCANCEL: Disables canceling the installation process.
/NORESTART: Prevents the installer from restarting the system, even if a restart is necessary.
/DIR="x:\dirpath": Overrides the default install directory.
/SLIPFILE="x:\filepath": Installs a license file.
/FEATURES=featureid: Indicates the feature(s) to install, separated with ";". See the list below for the available feature IDs.
/LOG="x:\filepath": Causes Setup to create a log file for debugging the installation process. If the file cannot be created, Setup aborts with an error message.
Note: a license file should be installed on the target machine before silently installing RAD Studio; alternatively, you can install one using the /SLIPFILE option.
Installable Core Features
The /FEATURES command-line parameter takes feature IDs from the table below. The silent installer lets you install the core IDE plus any of the following platforms and features (availability of the features also depends on the license being passed as a parameter). These are the IDs of the available features:
delphi: Installs all Delphi platforms
delphi_windows: Installs the Delphi Windows platform
delphi_macos: Installs the Delphi macOS platform
delphi_linux: Installs the Delphi Linux platform
delphi_ios: Installs the Delphi iOS platform
delphi_android: Installs the Delphi Android platform
cbuilder: Installs all C++Builder platforms
cbuilder_windows: Installs the C++Builder Windows platform
cbuilder_ios: Installs the C++Builder iOS platform
cbuilder_android: Installs the C++Builder Android platform
french: Installs the French language pack
german: Installs the German language pack
japanese: Installs the Japanese language pack
samples: Installs samples
help: Installs Help files
teechart: Installs TeeChart components
dunit: Installs DUnit components
interbase_express: Installs InterBase Express components
interbase_2020: Installs InterBase 2020
openjdk: Installs AdoptOpenJDK
android_sdk: Installs the Android SDK
Here is an example command line to silently install the program with all Delphi and C++Builder platforms (the license file should be installed before running this):
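As an illustration, the helper below assembles such a command line from the switches documented above. The installer file name radstudio_setup.exe is a placeholder; substitute the setup executable you actually downloaded.

```javascript
// Assemble a RAD Studio silent-install command line from the switches
// documented above. The installer file name passed in is a placeholder.
function buildSilentInstallCmd(installerExe, features, opts = {}) {
  const parts = [installerExe, "/SILENT", "/SUPPRESSMSGBOXES"];
  if (features && features.length > 0) {
    // Multiple feature IDs are joined with ";" per the /FEATURES entry.
    parts.push("/FEATURES=" + features.join(";"));
  }
  if (opts.slipFile) parts.push(`/SLIPFILE="${opts.slipFile}"`);
  if (opts.log) parts.push(`/LOG="${opts.log}"`);
  return parts.join(" ");
}

// All Delphi and all C++Builder platforms, with an install log:
console.log(buildSilentInstallCmd("radstudio_setup.exe", ["delphi", "cbuilder"], {
  log: "C:\\temp\\radinstall.log",
}));
```

Running the printed command from an elevated prompt performs the unattended install, apart from the initial EULA confirmation noted above.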
MySQL Backup is an app, built with Delphi, that lets you back up and restore a remote MySQL database over the internet from an Android device. It appears to be available not on Google Play but from a third-party Android store called SlideMe. It supports MySQL Server versions 6.0, 5.6, 5.5, 5.1, 5.0, 4.1, 4.0, and 3.23.
How do Delphi, WPF .NET Framework, and Electron perform compared to one another, and what is the best way to make an objective comparison? Embarcadero commissioned a whitepaper to investigate the differences between Delphi, WPF .NET Framework, and Electron for building Windows desktop applications. The benchmark application, a Windows 10 Calculator clone, was recreated in each framework by volunteers: three Delphi Most Valuable Professionals (MVPs), a freelance WPF expert, and a freelance Electron developer. In this blog post, we are going to explore the database access metric, which is part of the flexibility comparison used in the whitepaper. The calculator itself does not use a database, so the evaluations here are generally about the frameworks themselves.
Database Access
Does the framework include native libraries that support database access? Data persistence is critical for many applications and needs to be easy to use and integrated into any good development framework.
Delphi’s main advantage over WPF and Electron is that its FMX framework can deploy a single body of source code as a binary to any major desktop or mobile platform, maximizing a company’s reach to customers while minimizing code duplication and maintenance and upgrade headaches. It can support projects of any size, from logic controllers for industrial automation to worldwide inventory management, and can be developed for every tier, from a database-heavy back end to the GUI client side of an application. Finally, Delphi’s standard libraries provide easy access to almost every database type available and let developers reach operating system functionality on each platform, as well as interact with I/O devices and hardware sensors.
WPF with the .NET Framework directly targets Windows computers. The framework is aimed primarily at client-side desktop applications, but it can incorporate business logic in C# for middle-tier or back-end functions and access the ADO.NET Entity Framework for databases. WPF can reach Windows operating system functionality and I/O devices through .NET libraries, but with managed code after compilation rather than native code.
Electron is an open-source framework that targets the three major desktop operating systems through its Chromium browser base. It focuses on client-side applications, usually web-centric, but uses Node.js for middle-tier and back-end services. Electron provides hardware access through its Node.js process and can reach some, but not all, operating system functionality via Node.js libraries.
Let’s take a look at each framework.
Delphi
Delphi ships with several database libraries that connect to almost every database type on the market. Database access, queries, and data display are smoothly integrated through components that are accessible in the free Community Edition and at the first commercial license tier. While Delphi and WPF scored similarly in the whitepaper, Delphi ships with a more integrated toolchain and more supported databases.
FireDAC is a universal data access library for developing multi-device applications connected to enterprise databases. With its powerful universal architecture, FireDAC enables native, high-speed direct access from Delphi and C++Builder to InterBase, SQLite, MySQL, SQL Server, Oracle, PostgreSQL, DB2, SQL Anywhere, Advantage DB, Firebird, Access, Informix, DataSnap, and more, including the MongoDB NoSQL database.
FireDAC is a powerful yet easy-to-use access layer that supports, abstracts, and simplifies data access, providing all the features needed to build real-world, high-load applications. FireDAC offers a common API for accessing different database back ends, without giving up access to unique database-specific features and without compromising performance. Use FireDAC in the Android, iOS, Windows, and macOS applications you develop for PCs, tablets, and smartphones.
Below is a list of all the FireDAC databases supported by RAD Studio, including the minimum and maximum version supported in each release of RAD Studio.
Additional database connections tested with the FireDAC ODBC Bridge driver:
SAP Adaptive Server Enterprise: v15.0
IBM DB2 AS/400: n/a
QuickBooks: v16.0
InterSystems Cache: 2014
Pervasive SQL: v10.0
DBase: n/a
Excel: n/a
MicroFocus Cobol: n/a
Ingres Database: n/a
SAP MaxDB: n/a
Clarion: n/a
SolidDB: n/a
Unify SQLBase: n/a
In addition to FireDAC, Delphi and RAD Studio have an extensive third-party ecosystem offering many different commercial and open-source database access solutions. You can even access .NET libraries from Delphi and C++ through third-party solutions such as CrossTalk from Atozed Software.
WPF .NET Framework
WPF ships with access to database libraries, including the ADO.NET Entity Framework, that enable database connections, queries, and inserts through C# code. According to Microsoft, the .NET Framework ships with only the following data providers (source):
.NET Framework Data Provider for SQL Server
.NET Framework Data Provider for OLE DB
.NET Framework Data Provider for ODBC
.NET Framework Data Provider for Oracle
.NET Framework Data Provider for SQL Server Compact 4.0
WPF .NET Framework received a high score for database access in the whitepaper thanks to its ODBC support. Other data providers are available from third parties, but tracking down, installing, and keeping up to date each database library you need takes time.
Electron
Electron includes no native database access library in its initial installation, so it is not a single-package install that contains everything needed to access databases. It can access databases through Node.js, and several open-source libraries are available for working with server-based and serverless databases, including JavaScript implementations. However, tracking down, installing, and keeping up to date each database library you need takes time.
Here is an example of the effort required to connect to an Oracle database from Node.js for use in Electron:
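As a minimal sketch of that kind of connection, the following uses the node-oracledb add-on; the credentials and connect string are placeholders. The setup effort referred to above is that node-oracledb must first be installed via npm and, depending on the version, also needs the Oracle Client libraries present on the machine before a connection will succeed.

```javascript
// Minimal sketch of querying Oracle from Node.js via node-oracledb.
// The credentials and connect string below are placeholders.
function oracleConfig() {
  return {
    user: "scott",                     // placeholder user
    password: "tiger",                 // placeholder password
    connectString: "localhost/XEPDB1", // host/service-name placeholder
  };
}

async function queryOracle(sql) {
  // require() is deferred so the module is only loaded when a real
  // connection is attempted; `npm install oracledb` (plus the Oracle
  // Client libraries, where required) must already have succeeded.
  const oracledb = require("oracledb");
  const connection = await oracledb.getConnection(oracleConfig());
  try {
    const result = await connection.execute(sql);
    return result.rows;
  } finally {
    await connection.close();
  }
}

// With a reachable Oracle instance, you would call:
// queryOracle("SELECT 1 FROM DUAL").then(console.log).catch(console.error);
```

Compare this with Delphi, where the equivalent driver ships in the box and is configured through component properties rather than a separately installed native add-on.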
All three frameworks have at least some way to access most databases. However, Delphi and RAD Studio ship with the most supported databases of the three. Because these database access components ship with Delphi, no extra time is needed to track down and maintain third-party libraries; Delphi also has an extensive ecosystem of third-party database access components whose use is optional. According to Microsoft, WPF .NET Framework is a legacy framework, and it ships with only around five data providers (although, to be fair, you can access many databases through ODBC). Electron does not ship with database access components; they are readily available through the Node.js ecosystem but require extra effort to find and maintain, which makes the solution brittle. Overall, Delphi offers a more flexible and integrated toolchain, with more databases supported out of the box, than the other two frameworks.
Explore all of the metrics in the “Discovering the Best Developer Framework Through Benchmarking” whitepaper:
Embarcadero has announced the release of Delphi / C++Builder / RAD Studio 10.4.2. Along with new features, the release brings substantial quality improvements, building on 10.4 Sydney and the quality-focused 10.4.1 release.
RAD Studio 10.4.2 focuses on continuing to expand the product's core capabilities, from Windows development to multi-device support, and from IDE modernization to library quality and compiler performance. This post highlights some of the key new features and enhancements in 10.4.2.
The IDE now supports MSIX, Microsoft's new Windows application packaging format for Microsoft Store and enterprise deployment. MSIX support incorporates the technology formerly known as Desktop Bridge, one of the pillars of Microsoft Project Reunion.
The C++ RTL ships with the latest version of the Dinkumware STL. In addition, several major open-source C++ libraries will become available through GetIt.
Quality
RAD Studio 10.4.2 delivers enhancements and quality improvements across the product and its libraries, including the PPL, the HTTP and REST clients, FireDAC, and the SOAP and WSDL importers. It addresses more than 600 issues, including reports submitted by customers.
Get Started with 10.4.2 Today
A trial version of 10.4.2 is already available, and new product purchases include the 10.4.2 download. Existing customers with an active Update Subscription can use RAD Studio 10.4.2 with their current license. Download instructions for 10.4.2 will be sent out today (February 25). Downloads are available from the new customer portal (my.embarcadero.com).
Last week we held RAD Studio Live en Español!
This event is an evolution of, and replacement for, our traditional CodeRage, but with a focus dedicated to RAD as a whole.
The results were outstanding, and below you can find the playlist with all the videos, twelve all-new presentations in total:
The samples are available on GitHub here:
Embarcadero is pleased to announce the release of Delphi, C++Builder, and RAD Studio 10.4.2. With new features and significantly improved quality, the new version builds on the work in 10.4 Sydney and the 10.4.1 quality release.
RAD Studio 10.4.2 continues to expand some of the product's key cornerstones, from Windows to multi-device support, and from IDE modernization to library quality and compiler performance. In this blog post we highlight some of the most important new features and improvements in 10.4.2.
Best-in-Class Windows Application Development
VCL and Windows remain a core direction for the product, and 10.4.2 brings many improvements in this area, continuing from the 10.4 work:
A new flexible and virtualized list control called TControlList. Designed as a high-performance control for very long lists, this new VCL control offers a modern look and feel along with custom UI configuration options that allow controls to be placed in each list item
The second new VCL control is TNumberBox, a modern-looking numeric input control. It supports entry of integers, floating-point numbers with a specific number of decimal places and correct formatting, and currency values, and it can even evaluate expressions
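To illustrate the kind of expression evaluation a numeric input control like TNumberBox performs (this is a generic sketch of the idea, not Embarcadero's implementation), here is a minimal recursive-descent evaluator for the four arithmetic operators and parentheses:

```javascript
// Evaluate an arithmetic expression such as "2 + 3 * (4 - 1)".
// Grammar: expr = term (("+"|"-") term)*; term = factor (("*"|"/") factor)*;
// factor = number | "-" factor | "(" expr ")".
function evaluate(expr) {
  let pos = 0;
  function skipWs() { while (expr[pos] === " ") pos++; }
  function parseNumber() {
    skipWs();
    const start = pos;
    while (pos < expr.length && /[0-9.]/.test(expr[pos])) pos++;
    if (start === pos) throw new Error("number expected at " + start);
    return parseFloat(expr.slice(start, pos));
  }
  function parseFactor() {
    skipWs();
    if (expr[pos] === "(") {
      pos++;                       // consume "("
      const v = parseExpr();
      skipWs();
      if (expr[pos] !== ")") throw new Error("missing )");
      pos++;                       // consume ")"
      return v;
    }
    if (expr[pos] === "-") { pos++; return -parseFactor(); }  // unary minus
    return parseNumber();
  }
  function parseTerm() {
    let v = parseFactor();
    for (;;) {
      skipWs();
      if (expr[pos] === "*") { pos++; v *= parseFactor(); }
      else if (expr[pos] === "/") { pos++; v /= parseFactor(); }
      else return v;
    }
  }
  function parseExpr() {
    let v = parseTerm();
    for (;;) {
      skipWs();
      if (expr[pos] === "+") { pos++; v += parseTerm(); }
      else if (expr[pos] === "-") { pos++; v -= parseTerm(); }
      else return v;
    }
  }
  const result = parseExpr();
  skipWs();
  if (pos !== expr.length) throw new Error("unexpected input at " + pos);
  return result;
}

console.log(evaluate("2 + 3 * (4 - 1)")); // 11
```

A control embedding such an evaluator can accept "2+3*4" in its edit box and display the computed value on commit, which is the convenience the TNumberBox description refers to.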
Integrated IDE support for MSIX, Microsoft's newly recommended Windows application packaging format for Microsoft Store and enterprise deployment; MSIX support includes the technology previously known as Desktop Bridge and is one of the pillars of Microsoft Project Reunion
Numerous improvements and updates to the Konopka Signature VCL Controls library (KSVC) for better integration with VCL styles. The new version of KSVC is available in the GetIt Package Manager as a free add-on for customers with an Update Subscription
The TEdgeBrowser VCL component introduced in 10.4 (a wrapper for the Windows 10 Chromium-based Edge WebView2 control) has been updated to support Microsoft's GA release of the WebView2 control and SDK, and now offers improved file-cache management
New Developer Productivity and User Experience Features
The IDE remains the central focus for developer productivity. While our main emphasis was on continuing the Code Insight redesign around LSP technology, several other features were added, including:
Compared to previous versions, LSP in 10.4.2 adds many new Error Insight capabilities: the editor now shows colored underlines for hints and warnings as well as errors, so you can spot potentially important problems directly in the code editor (for both Delphi and C++)
There are also significant improvements to code completion in the uses clause, better parameter completion, improved code understanding for Ctrl+Click navigation, including the ability to Ctrl+Click the inherited keyword, improved package support, and a large number of other refinements
For C++, several important quality improvements were implemented in LSP, addressing issues such as international characters, indexing, and more
A new IDE style called Mountain Mist, reflecting classic IDE colors
Many enhancements to common developer activities in the IDE
Improved IDE responsiveness, with a new progress dialog that shows what the IDE is doing during a longer operation, such as opening a large project group
We updated library path management and added the ability to convert paths to and from absolute form, so that environment variables can be used in the path
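The path conversion described above amounts to substituting a variable's value for a macro reference and back. A rough sketch of the idea, using RAD Studio's $(NAME) macro convention; the variable names and helper functions here are illustrative assumptions, not the IDE's actual code:

```javascript
// Rewrite an absolute path in terms of a known variable, e.g.
// "C:\...\Studio\21.0\lib" -> "$(BDS)\lib".
function toMacro(absPath, vars) {
  // Try longer variable values first so the most specific prefix wins.
  const entries = Object.entries(vars).sort((a, b) => b[1].length - a[1].length);
  for (const [name, value] of entries) {
    if (absPath.startsWith(value)) {
      return "$(" + name + ")" + absPath.slice(value.length);
    }
  }
  return absPath; // nothing matched; leave the path absolute
}

// The inverse: expand every $(NAME) occurrence from the variable table.
function fromMacro(path, vars) {
  return path.replace(/\$\((\w+)\)/g, (match, name) => vars[name] ?? match);
}

const vars = { BDS: "C:\\Program Files\\Embarcadero\\Studio\\21.0" };
console.log(toMacro("C:\\Program Files\\Embarcadero\\Studio\\21.0\\lib", vars));
```

Converting stored paths to macro form keeps project files portable between machines where the underlying directories differ.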
An updated Migration Tool with an extended list of settings, three preset configurations to choose from, and the option to include additional configuration files
New Low Code App wizards for FireMonkey: available soon via GetIt for subscription customers, these wizards let RAD Studio developers quickly build a working multi-screen application from scratch by specifying a set of parameters through a wizard interface
RAD Studio 10.4.2 supports unattended, automated installation of Delphi, C++Builder, and RAD Studio with no UI interaction
Extended FireMonkey Platform Support
Delphi 10.4.2 adds support for deploying to and debugging on Android 11, along with significant improvements for App Bundle format deployment, required by the Google Play Store together with 64-bit app support
Delphi developers can target macOS 11 Big Sur with Intel-based 64-bit applications using the FireMonkey framework, target the macOS App Store, or distribute their macOS apps locally or through their own website
RAD Studio 10.4.2 supports building iOS 14 App Store-ready applications in Delphi and C++, the iOS 14 SDK, and debugging on iOS 14 devices
Neue Delphi- und C ++ – Funktionen
Leistungsverbesserungen des Delphi-Compilers durch Implementierung von über 20 verschiedenen Compiler-Optimierungen, wobei die Kompilierungszeit für einige große Kundenanwendungen auf einen Bruchteil der früheren 10.4-Versionen reduziert wurde
C ++ Builder 10.4.2 führt eine signifikante Verbesserung der Speichernutzung im Win64-Linker ein , einschließlich einer neuen Technologie, um die Datenmenge, die der Linker verarbeiten muss, erheblich zu reduzieren. Dazu werden die Debug-Informationen in separate Dateien aufgeteilt (bekannt als „Split DWARF“ ).
In der neuen Version wurde das C ++ – Ausnahmebehandlungssystem sowohl innerhalb eines Moduls als auch innerhalb eines Moduls grundlegend überarbeitet . Dies umfasst C ++ – Sprachausnahmen, SEH- und Betriebssystemausnahmen
Die C ++ RTL enthält die neueste Version der Dinkumware STL, und in GetIt werden mehrere weitere wichtige Open Source C ++ – Bibliotheken verfügbar sein
Qualität
RAD Studio 10.4.2 bietet außerdem zusätzliche Verbesserungen und Qualitätsverbesserungen für das gesamte Produkt und seine Bibliotheken, wobei der Schwerpunkt auf PPL-, HTTP- und REST-Client-, FireDAC-, SOAP- und WSDL-Importeuren liegt.
Produkttests für 10.4.2 sind jetzt verfügbar und die aktualisierten Produktentwicklungen sind live im Online-Shop verfügbar. Kunden mit Update-Abonnement können RAD Studio 10.4.2 noch heute mit ihrer vorhandenen Lizenz herunterladen und installieren und erhalten eine E-Mail mit der Verfügbarkeit der neuen Version. Downloads können im Neukundenportal unter my.embarcadero.com heruntergeladen werden .
Embarcadero is pleased to announce the release of Delphi, C++Builder, and RAD Studio 10.4.2. With new features and greatly improved quality, the new release builds on the work done in 10.4 Sydney and the 10.4.1 quality release.
RAD Studio 10.4.2 continues to expand some of the product's key cornerstones, from Windows to multi-device support, from IDE modernization to library quality and compiler performance. In this blog post, we want to highlight some of the main new features and enhancements in 10.4.2.
Best-in-class Windows application development
VCL and Windows remain a core direction for the product, and we have made many improvements in this space in 10.4.2, continuing from the 10.4 work:
A new flexible, virtualized list control called TControlList. This new VCL control, designed as a high-performance control for very long lists, provides a modern look and feel, complete with custom UI configuration options that allow controls to be placed in each list item.
The second new VCL control is TNumberBox, a modern-looking numeric input control. It supports input of integers, floating-point numbers with a given number of decimal digits and proper formatting, and currency values, and even allows expression evaluation.
Integrated IDE support for MSIX, Microsoft's newly recommended Windows application packaging format, for Microsoft Store and enterprise deployment. MSIX support incorporates the technology formerly known as Desktop Bridge, and is one of the pillars of Microsoft's Project Reunion.
Numerous enhancements and updates to the Konopka Signature Visual Control (KSVC) library for better integration with VCL styles. The new version of KSVC is available as a free add-on for update subscription customers in the GetIt Package Manager.
The TEdgeBrowser VCL component introduced in 10.4 (a wrapper around the Chromium-based Windows 10 Edge WebView2 control) has been updated with support for the GA version of Microsoft's WebView2 control and its SDK, and now offers improved support for file cache management.
New developer productivity and user experience features
The IDE remains the central focus for developer productivity, and while our primary focus was continuing the redesign of Code Insight around LSP technology, several other features have been added, including:
Compared to earlier versions, in 10.4.2 LSP adds many new features to Error Insight: the editor now shows colored underlines for hints and warnings as well as errors, which means you can see potentially important issues right in the code editor (for both Delphi and C++).
GaussProfit is a profit and value management/optimization solution adopted by more than 500 companies, including several Fortune 500 firms, and also used by the US Army and Navy for hardware valuation (KVA, Knowledge Value Added). The application, available both as a client/server system and as a browser app, was developed with Delphi. The developers explain:
Among the new features of the upcoming release of Delphi, C++Builder, and RAD Studio 10.4.2, Embarcadero will include two brand-new VCL controls: a virtualized list control and a numeric input box.
Warning: this blog post covers an unreleased product, which is subject to change until GA.
The release of RAD Studio 10.4.2 is getting closer, and you can join our preview webinar tomorrow; see https://blogs.embarcadero.com/whats-coming-in-10-4-2-sydney/ . This is some pre-release information (or beta blogging) on one specific area: the new VCL controls.
New VCL TControlList control
Embarcadero is introducing a new flexible, virtualized list control in the VCL library. The idea behind this control is to offer a new, modern-looking VCL control with custom UI configuration and high performance, so that it can be used with very long lists. The list represents a single-selection list, and all items visually share the same height and width.
The new control lets the developer define the content by designing one of the list items using graphical controls (that is, TGraphicControl descendants), and then provide data to the control to display the individual items, without creating every control for every item in the list: only the ones needed to display the data are created. Because the list is fully virtual, it can handle thousands and even millions of items while offering extremely fast scrolling. Besides computing and displaying only the items that fit on screen, the list caches the items' content using in-memory bitmaps.
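The virtualization idea described above, independent of the VCL specifics, boils down to a visible-range computation. The plain-Pascal sketch below is for illustration only and is not taken from the control's source:

```delphi
// Given the scroll offset, item height, and viewport height, only the
// items in [FirstVisible, LastVisible] need live rendering; everything
// else remains plain data until it is scrolled into view.
procedure VisibleRange(ScrollPos, ItemHeight, ViewportHeight, ItemCount: Integer;
  out FirstVisible, LastVisible: Integer);
begin
  FirstVisible := ScrollPos div ItemHeight;
  LastVisible := (ScrollPos + ViewportHeight - 1) div ItemHeight;
  if LastVisible >= ItemCount then
    LastVisible := ItemCount - 1;
end;
```

With, say, a 400-pixel viewport and 50-pixel items, only eight items are ever rendered at once, no matter how large ItemCount is; this is why the list stays fast with millions of items.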
The new control resembles the classic TDBCtrlGrid control: there is a panel for controls, you place controls on it, and virtual items are created at runtime. Unlike DBCtrlGrid, only TGraphicControl descendants can be placed on it, and all items are virtual. Below you can see the control at design time (with the surface of a single item available for editing) and at runtime (with the same content multiplied many times).
The list does not hold a collection of items with specific information. The data can be provided either via Live Bindings (including binding to a dataset or to a collection of objects) or via an event used to query the data of an individual item (so that the actual storage and mapping are left entirely to the developer). For each item the control displays, an event handler is invoked that you can use to customize the appearance of each item, in this case simply by changing the label's caption:
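A minimal handler along these lines might look like the following sketch. Note that the event name and signature shown here, and the TLabel named lblTitle assumed to be placed on the item design surface, are illustrative assumptions rather than confirmed API from this pre-release post:

```delphi
// Assumed setup: a TControlList named ControlList1 with a TLabel
// (lblTitle) placed on its item design surface.
procedure TForm1.FormCreate(Sender: TObject);
begin
  ControlList1.ItemCount := 10000;  // items are virtual; no per-item controls are created
end;

// Hypothetical per-item event: invoked for each item about to be shown,
// so the handler only has to update the shared design-surface controls.
procedure TForm1.ControlList1BeforeDrawItem(AIndex: Integer;
  ACanvas: TCanvas; ARect: TRect; AState: TOwnerDrawState);
begin
  lblTitle.Caption := 'Item ' + AIndex.ToString;
end;
```

Because the control is virtual, such a handler fires only for the items currently on screen, which is what keeps scrolling fast even with very large ItemCount values.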
With the previous design, 10,000 items, and multiple columns, this trivial code produces output like the following:
At design time there is a special dialog with a collection of preset configurations, covering both the TControlList properties and control collections with specific properties. You use the arrows at the top to choose the core configuration, and you can fine-tune it with some of the other checkbox options at the bottom. The wizard overrides the control list's settings.
The item you design is replicated (virtually) for each of the items requested with the ItemCount property. The visible surface of the control generally allows a number of items, all with the same width and height. The control has three different layouts:
Single, for a single column of items; in this case the item width matches the control width.
Multi Top To Bottom allows multiple columns, uses the available vertical space before moving to the next column, and offers vertical scrolling.
Multi Left To Right also allows multiple columns, but uses a different layout and a horizontal scrolling mode (see the image below).
Generally speaking, you can use the OnClick event for any control in the control list. The control supports high-DPI options and VCL styles, and is fully enabled for Live Bindings.
The new TControlListButton component
We cannot use TSpeedButton directly on the control panel, because that control does not handle special interactions such as a button's changed state. For controls that can have different states, we have added a special TControlListControl class (inheriting from TGraphicControl). You can create new controls that inherit from the TControlListControl class and use mouse events for their items. This is the approach taken by TControlListButton, the analogue of a TSpeedButton that can be used with TControlList. This button has three styles: push button, tool button, and link.
New VCL TNumberBox control
The new VCL TNumberBox control is a modern-looking numeric input control, modeled after the Windows platform's WinUI NumberBox control. The control supports input of integers, floating-point numbers with a given number of decimal digits and correct formatting, and currency values:
The user can increase or decrease the value using the arrow keys or the mouse wheel, and can step by a larger amount using the Page Up and Page Down keys. The component includes an optional spin button (configured via the SpinButtonOptions Placement property) that can be compact, inline, or disabled, as shown here:
The component also supports simple expression evaluation. When this option is enabled, a user can type an expression such as 40 + 2, and the control replaces it with the result. The control supports inline calculation of basic equations such as multiplication, division, addition, and subtraction (parentheses are allowed). Note that you can use the + and - symbols as both binary and unary operations: you can enter -23 or +23, write 55 + 23 and 55 - 23, and even combine them, as in 53 ++ 23 or 53 --23, which is evaluated as 53 - (-23) and therefore adds the two values.
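Configured from code, this might look like the sketch below. The property names used here (ValueType, Decimal, AcceptExpressions) are assumptions for illustration only; check the shipped component for the exact API:

```delphi
// Hypothetical configuration of a TNumberBox named NumberBox1.
procedure TForm1.FormCreate(Sender: TObject);
begin
  NumberBox1.ValueType := nbvtFloat;     // assumed: integer / float / currency modes
  NumberBox1.Decimal := 2;               // assumed: number of decimal digits shown
  NumberBox1.AcceptExpressions := True;  // assumed: enable inline expression evaluation
end;

// With expressions enabled, typing "40 + 2" and leaving the control
// replaces the text with 42; "53 --23" yields 76, i.e. 53 - (-23).
```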
Stay tuned
That's all for now. Join tomorrow's preview webinar, and, once it is released, download the trial version to experiment with these new VCL controls.
Entre las nuevas características del próximo lanzamiento de Delphi, C ++ Builder y RAD Studio 10.4.2 Embarcadero incluirá dos nuevos controles VCL, un control de lista virtualizado y un cuadro de entrada numérica.
Advertencia: esta publicación de blog cubre un producto inédito, que está sujeto a cambios hasta GA.
El lanzamiento de RAD Studio 10.4.2 se acerca y puede unirse a nuestro seminario web de vista previa mañana, consulte https://blogs.embarcadero.com/whats-coming-in-10-4-2-sydney/ . Esta es alguna información previa al lanzamiento (o blogs beta) sobre un área específica, nuevos controles de VCL.
Nuevo control VCL TControlList
Embarcadero is introducing a new flexible, virtualized list control in the VCL library. The idea behind this control is to offer a modern-looking VCL control with a customizable UI and high performance, suitable for very long lists. The control presents a single selection list, and all items visually share the same height and width.
The new control lets the developer define the content by designing one of the list elements using graphic controls (that is, TGraphicControl descendants) and supply data to the control to display individual elements, without creating controls for every item in the list, only those needed to display the visible data. Being fully virtual, the list can handle thousands and even millions of items while offering extremely fast scrolling. Besides computing and displaying only the items that fit on screen, the list caches item content using in-memory bitmaps.
The new control resembles the classic TDBCtrlGrid control: there is a panel for controls, you place controls on it, and virtual items are created at run time. Unlike TDBCtrlGrid, you can place only TGraphicControl descendants on it, and all items are virtual. Below you can see the control at design time (with the surface of a single item available for editing) and at run time (with the same content replicated many times).
The list does not hold a collection of items with specific data. Data can be supplied either via LiveBindings (including binding to a dataset or an object collection) or via an event that queries the data for an individual item (so that the actual storage and mapping are entirely up to the developer). For each item to display, the control calls an event handler you can use to customize the appearance of each item, in this case simply by modifying a label's caption:
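As a minimal sketch of such a per-item handler (the event name, its signature, and the component names ControlList1 and Label1 reflect the 10.4.2 beta and this example's form design, and may differ at GA):

```delphi
// Assumes a TControlList named ControlList1 whose item design surface
// contains a TLabel named Label1, and the OnBeforeDrawItem event as
// exposed in the 10.4.2 beta.
procedure TForm1.ControlList1BeforeDrawItem(AIndex: Integer;
  ACanvas: TCanvas; ARect: TRect; AState: TOwnerDrawState);
begin
  // Called once per visible item: adjust the designed controls
  // for the item about to be painted.
  Label1.Caption := 'Item ' + AIndex.ToString;
end;
```

Setting ControlList1.ItemCount then gives the virtual list its size; no per-item control instances are ever created.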
With the previous design, 10,000 items, and multiple columns, this trivial code produces output like the one below:
At design time there is a special dialog with a collection of preset configurations, which include settings for the TControlList properties and collections of controls with specific properties. Use the arrows at the top to pick the core configuration, and fine-tune it with some of the checkbox options at the bottom. The wizard overrides the control list's current configuration.
The item you design is replicated (virtually) for each of the items requested via the ItemCount property. The visible surface of the control generally fits several items, all with the same width and height. The control has three different layouts:
Single, for a single column of items; in this case the item width matches the control width.
Multi Top To Bottom allows multiple columns and uses the available vertical space before moving to the next column, offering vertical scrolling.
Multi Left To Right also allows multiple columns, but uses a different layout and a horizontal scrolling mode (see the image below).
In general terms, you can use the OnClick event of any control inside the control list. The control supports High-DPI and VCL styles, and is fully LiveBindings-enabled.
The New TControlListButton Component
You cannot use a TSpeedButton directly on the panel, because the control does not handle special interactions such as changes in the button's state. For controls that can have different states, we added a special TControlListControl class (inheriting from TGraphicControl). You can create new controls that inherit from TControlListControl and use mouse events for their items. This is the approach used by TControlListButton, the analogue of a TSpeedButton that can be used with a TControlList. This button has three styles: push button, tool button, and link.
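For illustration, a custom item control might be sketched as follows (the unit name Vcl.ControlList is an assumption based on the beta, and TItemMarker is a hypothetical class; the painting hooks come from the TGraphicControl ancestry described above):

```delphi
uses
  System.Classes, Vcl.Graphics, Vcl.ControlList;

type
  // Hypothetical stateful item control: paints a filled marker and,
  // via the TControlListControl base class, can react to per-item
  // mouse events (hover, press) the way TControlListButton does.
  TItemMarker = class(TControlListControl)
  protected
    procedure Paint; override;
  end;

procedure TItemMarker.Paint;
begin
  // TGraphicControl descendants draw themselves on their Canvas.
  Canvas.Brush.Color := clHighlight;
  Canvas.FillRect(ClientRect);
end;
```

This is a sketch under stated assumptions, not the shipped API; the key point is that item controls descend from TControlListControl rather than TWinControl, keeping items window-handle-free and virtual.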
New VCL Control: TNumberBox
The new TNumberBox VCL control is a modern-looking numeric input control modeled after the Windows platform's WinUI NumberBox control. The control supports input of integers, floating-point numbers with a given number of decimal digits and proper formatting, and currency values:
The user can increase or decrease the value using the arrow buttons, the corresponding keys, or the mouse wheel, and can also change the value by a larger step using the Page Up and Page Down keys. The component includes an optional spin button (configured with the SpinButtonOptions Placement property), which can be compact, inline, or disabled, as shown here respectively:
The component also supports simple expression evaluation; if enabled, a user can enter an expression such as 40 + 2 and the control will replace it with the result. The control supports inline calculation of basic operations such as multiplication, division, addition, and subtraction (with support for parentheses). Note that the + and - symbols can be used both as binary and as unary operators, so you can type -23 or +23, you can write 55 + 23 and 55 - 23, and even combine them as in 53 ++ 23 or 53 - -23, which evaluates as 53 - (-23) and thus adds the two values.
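As a hedged sketch of configuring the control in code (property and enumeration names such as Mode, Decimal, AcceptExpressions, and the SpinButtonOptions placement values reflect the 10.4.2 beta and may change before GA):

```delphi
// Assumes a TNumberBox named NumberBox1 dropped on the form.
NumberBox1.Mode := TNumberBoxMode.nbmFloat;   // integer, float, or currency input
NumberBox1.Decimal := 2;                      // decimal digits shown and accepted
NumberBox1.AcceptExpressions := True;         // typing "40 + 2" yields the result
NumberBox1.SpinButtonOptions.Placement :=
  TNumberBoxSpinButtonPlacement.nbspInline;   // or nbspCompact
```

The design choice mirrors WinUI: formatting, range clamping, and expression evaluation live in the control, so the application only reads the final numeric Value.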
Stay Tuned
That's all for now. Tune in to tomorrow's preview webinar and (once it ships) download the trial version to experiment with these new VCL controls.
Free Color Picker is a program that lets you capture the color of any pixel displayed on the screen, and it is developed in Delphi. As stated by the developer: "Free Color Picker is an open-source application. To capture a color, simply move the cursor to the desired position and press the F4 key. The color under the cursor is added to the color palette in the right part of the main window. To make color capture easier, there is a screen magnifier in the center of the main window that displays a magnified image around the current cursor position. The maximum magnification is 30x. Any captured color can be freely modified with the built-in color editor. In addition, the program allows easy modification of the entire color palette, sorting and filtering colors by various criteria, capturing colors from graphic files loaded into the program, generating random colors for user-defined ranges of RGB channel values and ranges of HSL component values, searching for triad colors on a color wheel, and much more."
"Thesi Ev Sphere has built-in modules for Industry 4.0 projects and is a best-in-class solution in the country. Thesi Ev Sphere was developed to drive the integration and innovation of corporate business processes and the centralization of information and management control. The Thesi Ev Sphere architecture incorporates a variety of solutions that vary with company size. The solution is dynamic, not static: under a pay-per-use policy, companies can customize their Thesi EvS configuration over a long time horizon by purchasing the modules they need as they grow. With Thesi EvS ERP, promoting the functional integration of operational processes increases efficiency with the same amount of resources and reduces coordination costs within the company."
The Delphi / RAD Studio ecosystem relies on many component partners to support the diverse needs of developers. The component market has existed for over 20 years and continues to thrive. Many partners stand out with excellent product portfolios that help developers ship professional apps faster. Even more importantly, many are at the cutting edge of innovation, helping to advance what can be achieved with Delphi and RAD Studio.
We enjoy working closely with our many technology partners. Companies like DevExpress, TMS Software, and DelphiStyles are instrumental in giving developers the tools they need to succeed. Many share a rich history of collaboration with Embarcadero, and we love learning from them. I recently spoke with Ray Navasarkian of DevExpress and thought it would be fun to share some of his perspectives. We plan to make this discussion part of a longer series.
What is your vision for DevExpress?
The word “vision” may sound trite, so I think it is best to consider our guiding principles. Number one is that we conduct business ethically and with absolute integrity. We would be nothing without our customers. As such, we owe them the truth. When we are able, we promise and deliver to the best of our abilities. When we cannot, we let them know that we are simply unable. We do not always get this right and we definitely make mistakes, but our objective is simple—to engage our customers in a fair and honest manner, each and every day.
The second guiding principle is to deliver exceptional products that meet and exceed expectations. Like the first, this is not easy to execute, but I think the quality of our VCL product line speaks to our overall success in this regard. We started DevExpress in 1998 because we love Delphi and saw the opportunity to innovate in the VCL component space. We saw that the VCL component market needed an "Outlook-inspired" data grid component. Data grids are key UI elements in most desktop apps, and Microsoft's Office 97 UI overhaul gave us the opportunity to enter the VCL component market with a bang. The rest, as they say, is history.
More than 20 years have passed since those halcyon days. We shipped some great products and had our share of lackluster releases, but on the whole I am proud of what we have achieved in the VCL market. I think we offer our loyal customers a robust set of UI components that effectively cover a wide range of usage scenarios.
Thanks to the great relationship with Embarcadero and the excellent feedback from our loyal customers, I expect to deliver much more over the next 20 years. Long live RAD Studio.
What is DevExpress's main focus today?
DevExpress manages an extensive product portfolio that goes beyond UI components for RAD Studio. While it is not always easy, we do our best to innovate on multiple fronts and meet developers' needs across multiple development platforms.
When it comes to RAD Studio, new products and features are shaped by user demand and market requirements. We remain fully focused on VCL because of our large developer community, a community that remains committed both to RAD Studio and to our VCL product line.
Our biggest challenge today is the proliferation of new development platforms. It is not always easy to meet and exceed expectations when you have to juggle new platforms alongside older ones. Consider our VCL product line: we ship over 200 UI controls and libraries. Over the years, a handful of users have asked us to port our UI controls to FMX. As much as I would have liked to, our resources are limited, so we had to make the hard decision to forgo FMX development and focus our energy on the VCL.
We released an FMX data grid last year. Although we ultimately decided to halt FMX development for now, we remain open to its possibilities. If our customers choose FMX over the VCL, we will reallocate resources as needed. In the meantime, we can make our FMX grid available to our community free of charge.
What do you think of full component libraries versus best-in-class components? We have a few of our own in JavaScript, and there we see that best-in-class seems to be the stronger formula.
I would argue that the underlying component needs of desktop developers differ from those of web developers. When we released our first VCL product, we quickly had to address other key UI elements such as a ribbon, a calendar, and so on. One reason was look and feel: our customers did not want to mix UI elements from different vendors within a single desktop app. Web developers do not necessarily want to mix and match either, but I believe they are more willing to invest in best-in-class products than in a single monolithic component library.
Put differently, I think a single best-in-class component can survive in the JavaScript space. My experience over the years tells me it is much harder to do the same in the desktop space. I could be wrong, of course, but I remember a vendor in the Microsoft component ecosystem that is no longer in business because it did not back up its best-in-class UI component with additional UI controls. Once competitors caught up with that vendor's industry-leading features, its market share eroded quite quickly.
Of course, certain component libraries can be integrated into a desktop app more independently. This includes charting, document management, and reporting. Our product line is a perfect example: as you know, we do not offer charting or reporting for the VCL. That is not to say users do not ask; we are routinely asked to provide a charting and reporting library for RAD Studio.
While mixing and matching is more likely in web development, the ability to work with a single vendor, one that keeps its promises, seems to me to be the ideal. Mixing and matching UI tools can hurt productivity, raise maintenance costs, and, of course, affect upgrade paths.
UX is very important for modern applications. One of the hurdles some in our community experience, especially in mobile development, is that UI quality can vary considerably. What do you think about the future of UX development in RAD Studio?
We are very proud of our achievements so far, but there is still a lot of work to do for VCL and RAD Studio developers. UX standards keep evolving, and we must do the same. It is not always easy, but our close relationship with Embarcadero should help us meet our shared customers' UX requirements for years to come.
As you know, Embarcadero recently hosted a Desktop Summit where we presented our point of view on UI design. I look forward to more opportunities like this. I think everyone in the RAD Studio developer community benefits when component vendors freely share their perspectives on UI design.
What impact do you think low-code will have on the component space?
I am confident that opportunities exist in the low-code space. Ultimately, the marketplace determines what we do and how we do it. If low-code becomes ubiquitous, we will adapt accordingly.
For now, native application development remains the top priority at DevExpress. As you know, we released a major update to our VCL product line in December. That release included a new VCL Gantt control and updates to our data grid, spreadsheet, and PDF viewer for the VCL. DirectX also plays an important role in our desktop development strategy; hopefully we can discuss why we are moving from GDI to DirectX in a future interview. Check out the full summary of the key features we shipped late last year.
Thank you for the opportunity to discuss DevExpress with the Embarcadero developer community.
uniGUI Web Application Framework is a full-stack web application development tool for Embarcadero Delphi. uniGUI is powered by Sencha Ext JS, an advanced cross-browser JavaScript library, and lets developers build visually compelling web applications directly in the Delphi IDE.
Topics: uniGUI, Sencha Ext JS, databases, mobile UI, Linux, HyperServer, scalability, stability, remote deployment
The RAD Studio IDE has an integrated package manager called GetIt that lets you browse, download, purchase, and install packages. Packages provide libraries, components, IDE extensions, SDKs, styles, samples, and more. The packages available in the package manager can be browsed on the Embarcadero GetIt site and installed from the IDE or via the command line. In addition, the latest list of new packages added to the GetIt Package Manager is available via RSS feed.
GetIt helps customers discover valuable libraries and install them easily in the IDE. It is also meant to simplify migration from one version of RAD Studio to the next, by ensuring those libraries are available at release time so that an existing project can be migrated smoothly after a quick download of the libraries it needs.
You can install packages from GetIt via the following command line: GetItCmd
GetIt Package Manager - Version 7.0
Copyright (c) 2019 Embarcadero Technologies, Inc. All Rights Reserved.
Usage: GetItCmd []:
-install or -i
Install Item[s] separated with ';'
-uninstall or -u
Uninstall Item[s] separated with ';'
-user_name:
User name for proxies with required authentication.
-password:
Password for proxies with required authentication.
-accept_eulas
The user accepts EULA[s] of downloaded package[s].
-verb:[Quiet/Minimal/Normal/Detailed]
Specifies the verbose level for console output messages.
-listavailable:[Filter by substring]
List all available packages from package source.
Options:
-filter:[All/Free/Acquired/Installed]. Default[Installed].
-sort:[Name/Vendor]. Default[Name].
-r Custom registry subkey for saving.
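The options above can be combined into a single invocation. The sketch below composes one as a dry run and simply echoes it rather than executing it, since GetItCmd ships with RAD Studio on Windows; the package identifier `TMSVCLUIPack` is purely illustrative — real identifiers come from `-listavailable`:

```shell
# Dry-run sketch: compose a GetItCmd invocation without executing it.
# "TMSVCLUIPack" is an illustrative package ID, not a confirmed one;
# list real IDs first with:  GetItCmd -listavailable: -filter:All
CMD="GetItCmd -install:TMSVCLUIPack -accept_eulas -verb:Minimal"
echo "$CMD"
```

On a machine with RAD Studio installed, running the echoed command from the RAD Studio command prompt would download the package, accept its EULA non-interactively, and install it with minimal console output.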
In December 2020, a number of new or updated libraries were added to Embarcadero GetIt.
nrComm Lib provides tools for serial communication tasks and device access. It has ready-made solutions for serial RS232 and LPT ports, USB, Bluetooth, HID, Modbus, GSM, SMS, and more. (29 Jan 2021, trial version)
The Sempare Template Engine for Delphi enables flexible text generation. It can be used to generate email, HTML, source code, XML, configuration, and more. (28 Jan 2021, GPL v3.0 or Sempare Commercial)
TMS VCL WebGMaps is a highly configurable component set for integrating Google Maps into Delphi and C++Builder applications. Various map modes are available, and additional map information can be displayed. (20 Jan 2021, Commercial)
Build modern-looking, feature-rich Windows applications faster with well over 600 components in one cost-effective bundle for Delphi & C++Builder. (20 Jan 2021, Commercial)
TMS VCL Cloud Pack is a Delphi and C++Builder component library for seamless use of all major cloud services, such as Facebook, Twitter, OneDrive, Google Drive, iCloud, Amazon Cloud Drive, LinkedIn, PayPal, Trello, YouTube, and many more. (Jan 2021, Commercial)
DB-aware and non-DB-aware feature-rich charting components for business, statistical, financial, and scientific data. (20 Jan 2021, Commercial)
Give your apps ultimate flexibility and power with native Pascal or Basic scripting and a full IDE with visual form designer, object inspector, and more. (20 Jan 2021, Commercial)
Set of components for true native iOS application development. No compromises: 100% iOS performance, 100% iOS look, 100% iOS feel. (18 Jan 2021, Commercial)
Use one UI control set to master application development in VCL, FMX, and LCL. Includes grid, planner, tree view, rich editor, toolbar, … (18 Jan 2021, Commercial)
Fully cross-platform charting component for VCL, FMX, and LCL development, covering business, statistical, financial, and scientific data. (18 Jan 2021, Commercial)
Fully cross-platform, single-source component set for desktop and mobile application development on Windows, macOS, iOS, and Android. (18 Jan 2021, Commercial)
TMS FMX Cloud Pack is a Delphi and C++Builder component library for seamless use of all major cloud services, such as Facebook, Twitter, OneDrive, Google Drive, Amazon Cloud Drive, LinkedIn, PayPal, Trello, YouTube, and many more. (18 Jan 2021, Commercial)
Powerful, extensive, and flexible component suite for native Excel reporting as well as file generation and manipulation for VCL & FMX. (18 Jan 2021, Commercial)
Communication package providing access to serial ports under Windows. Its event-driven architecture delivers the highest possible performance and allows all tools to run in the background. (18 Jan 2021, Commercial)
Source-code profiler for measuring the runtime of 64- and 32-bit applications developed with Delphi. Bottlenecks are easy to find with the help of a convenient viewer. (12 Jan 2021, Freeware)
The RAD Studio IDE has a built-in package manager called GetIt that lets you browse, download, purchase, and install packages. Packages provide libraries, components, IDE extensions, SDKs, styles, samples, and more. The packages available in the package manager can be browsed on the Embarcadero GetIt site and installed in the IDE or via the command line. In addition, the latest list of new packages added to the GetIt Package Manager is available via RSS feed.
GetIt can help customers discover valuable libraries and easily install them in the IDE. It is also meant to simplify migration from one version of RAD Studio to the next by making sure those libraries are available at release time, so that an existing project can be migrated easily after a quick download of the required libraries.
You can install packages from GetIt via the command line with GetItCmd:
GetIt Package Manager - Version 7.0
Copyright c 2019 Embarcadero Technologies, Inc. All Rights Reserved.
Usage: GetItCmd []:
-install or -i
Install Item[s] separated with ';'
-uninstall or -u
Uninstall Item[s] separated with ';'
-user_name:
User name for proxies with required authentication.
-password:
Password for proxies with required authentication.
-accept_eulas
The user accepts EULA[s] of downloaded package[s].
-verb:[Quiet/Minimal/Normal/Detailed]
Specifies the verbose level for console output messages.
-listavailable:[Filter by substring]
List all available packages from package source.
Options:
-filter:[All/Free/Acquired/Installed]. Default[Installed].
-sort:[Name/Vendor]. Default[Name].
-r Custom registry subkey for saving.
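To illustrate, the switches in the usage listing above can be combined as follows (the package identifiers are hypothetical; real ones can be discovered with -listavailable):

```shell
# List all free packages in the catalog, sorted by vendor:
GetItCmd -listavailable -filter:Free -sort:Vendor

# Install two packages (hypothetical identifiers) without EULA prompts,
# keeping console output terse:
GetItCmd -i "PackageA;PackageB" -accept_eulas -verb:Minimal
```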
A number of new or updated libraries were added to Embarcadero GetIt in December 2020. Take a look!
nrComm Lib provides tools for serial communication tasks and device access. It has ready-made solutions for: rs232/lpt serial ports, usb, bluetooth, hid, modbus, gsm, sms and others. 29 Jan 2021 Trial
Sempare Template Engine for Delphi allows flexible text manipulation. It can be used to generate email, html, source code, xml, configuration, etc. 28 Jan 2021 GPL v3.0 or Sempare Commercial
TMS VCL WebGMaps is a set of components with extensive configurability for integrating Google Maps in Delphi and C++ Builder. Different map modes are available and extra map information can be displayed. 20 Jan 2021 Commercial
Create modern-looking and feature-rich Windows applications faster with well over 600 components in one money- and time-saving bundle for Delphi & C++ Builder. 20 Jan 2021 Commercial
TMS VCL Cloud Pack is a Delphi and C++ Builder component library for seamlessly using all major cloud services such as Facebook, Twitter, OneDrive, Google Drive, iCloud, Amazon Cloud Drive, LinkedIn, Paypal, Trello, Youtube and many more. 20 Jan 2021 Commercial
DB-aware and non-DB-aware feature-rich charting components for business, statistical, financial and scientific data. 20 Jan 2021 Commercial
Add ultimate flexibility and power to your applications with native Pascal or Basic scripting and a full IDE with visual form designer, object inspector and more. 20 Jan 2021 Commercial
Set of components for true native macOS application development. No compromises: a 100% macOS look and feel! 18 Jan 2021 Commercial
Set of components for true native iOS application development. No compromises: 100% iOS performance, 100% iOS look, 100% iOS feel components. 18 Jan 2021 Commercial
Use one UI control set to master application development in VCL, FMX and LCL. Includes grid, planner, treeview, rich editor, toolbar, … 18 Jan 2021 Commercial
Fully cross-platform chart component for VCL, FMX and LCL development, designed for business, statistical, financial and scientific data. 18 Jan 2021 Commercial
TMS FNC Blox offers cross-platform and cross-framework diagramming/flowchart components for Windows, iOS, macOS, Android, Linux, Raspbian. 18 Jan 2021 Commercial
Fully cross-platform, single-source component set for desktop and mobile application development for Windows, Mac OS X, iOS and Android. 18 Jan 2021 Commercial
TMS FMX Cloud Pack is a Delphi and C++ Builder component library for seamlessly using all major cloud services such as Facebook, Twitter, OneDrive, Google Drive, Amazon Cloud Drive, LinkedIn, Paypal, Trello, Youtube and many more. 18 Jan 2021 Commercial
Powerful, extensive and flexible component suite for native Excel reporting and file generation and manipulation for VCL & FMX. 18 Jan 2021 Commercial
Communications package providing access to serial ports under Windows. The event-driven architecture delivers the highest possible performance and allows all tools to run in the background. 18 Jan 2021 Commercial
Source code profiler for measuring the run time of 64- and 32-bit applications developed with Delphi. Bottlenecks are easily found with the help of a comfortable viewer. 12 Jan 2021 Freeware
var
  a: array[0..99] of Single;   (* classic static array *)
  b: fVector;                  (* VectorLib vector *)
begin
  b := VF_vector(100);         (* allocate space for 100 elements *)
  VF_equ1( @a, 100 );          (* set first 100 elements of a = 1.0 *)
  VF_equC( b, 100, 3.7 );      (* set first 100 elements of b = 3.7 *)
end;

(* ------------------------------ *)

var
  ws: VF_NONLINFITWORKSPACE;
  fopt: VF_NONLINFITOPTIONS;
begin
  V_getNonlinfitOptions( @fopt );
  (* at this point, modify fopt as desired... *)
  VF_nonlinfit( ParValues, AStatus, nParameters, X, Y, sz,
                @ModelFunc, @DerivativeFuncs, @ws, @fopt );
end;
OptiVec is a product of Dr. Martin Sander Software. To use the full feature set of this product, it must be purchased from the Dr. Martin Sander Software site, and support for the product is provided by Dr. Martin Sander Software.
The full title should be: What is the purpose of the OldCreateOrder property in a form and how does it affect my coding today?, but that turned out to be too long for a catchy headline.
You may already have wondered about OldCreateOrder. It appears not only in forms, but also in some other classes, for instance in a TDataModule. It was introduced in either Delphi 4 or Delphi 5 (I can only say it was absent in Delphi 3 and present in Delphi 5).
Basically, OldCreateOrder controls when the OnCreate and OnDestroy events are called. In the case of a TCustomForm descendant, with OldCreateOrder = True the OnCreate is called inside TCustomForm.Create after the DFM is loaded, while with False the OnCreate is called after all inherited constructors have finished (i.e. in TCustomForm.AfterConstruction).
This has consequences for the way you code. Let me show you the difference with a simple example of a VCL form with a memo and a button. The memo contains some text entered at design time, which is saved to a TStringList instance during FormCreate. The TStringList is created in the constructor and destroyed in the destructor.
A click on the button restores the text to the saved content.
constructor TMainForm.Create(AOwner: TComponent);
begin
inherited;
FDesignText := TStringList.Create;
end;
procedure TMainForm.FormCreate(Sender: TObject);
begin
FDesignText.Assign(edtText.Lines);
end;
destructor TMainForm.Destroy;
begin
FDesignText.Free;
FDesignText := nil;
inherited Destroy;
end;
procedure TMainForm.FormDestroy(Sender: TObject);
begin
FDesignText.Clear;
end;
procedure TMainForm.btnResetClick(Sender: TObject);
begin
edtText.Lines := FDesignText;
end;
When you run this project with OldCreateOrder = False (the default), everything works as expected (unless you are doing this with a really, really old Delphi version).
Let's switch OldCreateOrder to True and try again: it will crash with a nil pointer reference in FormCreate.
If we recall what I have written above
with OldCreateOrder = True the OnCreate is called inside TCustomForm.Create
this was to be expected.
constructor TMainForm.Create(AOwner: TComponent);
begin
inherited; // <== here we call FormCreate
FDesignText := TStringList.Create;
end;
The typical workaround back in the old days, when OldCreateOrder was the only, standard behavior, was to move the TStringList creation before the inherited call. Luckily this approach works even when OldCreateOrder is False.
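In code, the reordered constructor from the example above looks like this:

```pascal
constructor TMainForm.Create(AOwner: TComponent);
begin
  // Create owned objects before the inherited call: with
  // OldCreateOrder = True, FormCreate runs inside the inherited
  // TCustomForm.Create, so FDesignText must already exist by then.
  FDesignText := TStringList.Create;
  inherited Create(AOwner);
end;
```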
Of course, we could just as well move the creation into the FormCreate event, but personally I dislike that approach – it simply doesn't look clean to me.
Looking at the other side – Destroy and FormDestroy – things are quite similar, albeit the other way round. With OldCreateOrder = True, the FormDestroy is called inside the inherited call of Destroy, while with OldCreateOrder = False it is called before any destructor is executed.
So, whenever you see the pattern of creating instances before calling a constructor's inherited, and freeing them after the inherited call in a destructor, you now know where this habit originates.
Am I the only one who gets excited about new releases? If you look at the RAD Studio Roadmap, you will see that a new 10.4.2 release of Sydney is planned for the first half of 2021. If you are on Update Subscription, you were invited to the NDA 10.4.2 Hunter beta. Well, now is your chance for a little sneak peek at the next major release of RAD Studio, Delphi and C++ Builder!
Q&A Log
This is the Q&A log from the launch webinar. We tried to clean it up and remove any inadvertent personal information or inappropriate comments. Please leave a comment if you notice anything that needs to be removed. Thank you!
In this blog post we showcase a project we recently finished for the National Democratic Institute, an NGO that supports democratic institutions and practices worldwide. NDI's mission is to strengthen political and civic organizations, safeguard elections, and promote citizen participation, openness, and accountability in government.
Our assignment was to build an MVP of an application that supports the facilitators of a cybersecurity themed interactive simulation game. As this webapp needs to be used by several people on different machines at the same time, it needed real-time synchronization which we implemented using Socket.io.
In the following article you can learn more about how we approached the project, how we structured the data access layer, and how we solved challenges around creating our WebSocket server, just to mention a few. The final code of the project is open source, and you're free to check it out on GitHub.
A Brief Overview of the CyberSim Project
Political parties are at extreme risk from hackers and other adversaries; however, they rarely understand the range of threats they face. When they do get cybersecurity training, it's often in the form of dull, technically complicated lectures. To help parties and campaigns better understand the challenges they face, NDI developed a cybersecurity simulation (CyberSim) about a political campaign rocked by a range of security incidents. The goal of the CyberSim is to facilitate buy-in for and implementation of better security practices by helping political campaigns assess their own readiness and experience the potential consequences of unmitigated risks.
The CyberSim is broken down into three core segments: preparation, simulation, and an after action review. During the preparation phase, participants are introduced to a fictional (but realistic) game-play environment, their roles, and the rules of the game. They are also given an opportunity to select security-related mitigations from a limited budget, providing an opportunity to "secure their systems" to the best of their knowledge and ability before the simulation begins.
The simulation itself runs for 75 minutes, during which the participants can take actions to raise funds, boost support for their candidate and, most importantly, respond to events that may negatively impact their campaign's success. These events are meant to test the readiness, awareness, and skills of the participants related to information security best practices. The simulation is designed to mirror the busyness and intensity of a typical campaign environment.
The after action review is in many ways the most critical element of the CyberSim exercise. During this segment, CyberSim facilitators and participants review what happened during the simulation, which events led to which problems, and what actions the participants took (or should have taken) to prevent security incidents from occurring. These lessons are closely aligned with the best practices presented in the Cybersecurity Campaigns Playbook, making the CyberSim an ideal opportunity to reinforce existing knowledge or introduce new best practices presented there.
Since data representation serves as the skeleton of each application, Norbert – who built part of the app – will first walk you through the data layer, created using knex and Node.js. Then he will move on to the program's heart: the socket server that manages real-time communication.
This is going to be a series of articles, so in the next part, we will look at the frontend, which is built with React. Finally, in the third post, Norbert will present the muscle that is the project's infrastructure. We used Amazon's tools to create the CI/CD, host the webserver, the static frontend app, and the database.
Now that we're through with the intro, you can enjoy reading this Socket.io tutorial / Case Study from Norbert:
The Project's Structure
Before diving deep into the data access layer, let's take a look at the project's structure:
As you can see, the structure is relatively straightforward, as we’re not really deviating from a standard Node.js project structure. To better understand the application, let’s start with the data model.
The Data Access Layer
Each game starts with a preprogrammed poll percentage and an available budget. Throughout the game, threats (called injections) occur at predefined times (e.g., in the second minute) to which players have to respond. To spice things up, the staff has several systems that are required for making responses and taking actions. These systems often go down as a result of injections. The game's final goal is simple: the players have to maximize their party's poll numbers by responding to each threat.
We used a PostgreSQL database to store the state of each game. Tables that make up the data model can be classified into two different groups: setup and state tables. Setup tables store data that are identical and constant for each game, such as:
injections - contains each threat players face during the game, e.g., Databreach
injection responses - a one-to-many table that shows the possible reactions for each injection
actions - operations that have an immediate one-time effect, e.g., Campaign advertisement
systems - tangible and intangible IT assets, which are prerequisites of specific responses and actions, e.g., HQ Computers
mitigations - tangible and intangible assets that mitigate upcoming injections, e.g., Create a secure backup for the online party voter database
roles - different divisions of a campaign party, e.g., HQ IT Team
curveball events - one-time events controlled by the facilitators, e.g., Banking system crash
On the other hand, state tables define the state of a game and change during the simulation. These tables are the following:
game - properties of a game like budget, poll, etc.
game systems - stores the condition of each system (is it online or offline) throughout the game
game mitigations - shows if players have bought each mitigation
game injection - stores information about injections that have happened, e.g., was it prevented, responses made to it
game log
To help you visualize the database schema, have a look at the following diagram. Please note that the game_log table was intentionally left out of the image, since it adds unnecessary complexity to the picture and doesn't really help in understanding the core functionality of the game:
To sum up, state tables always store any ongoing game's current state. Each modification done by a facilitator must be saved and then transported back to every coordinator. To do so, we defined a method in the data access layer to return the current state of the game by calling the following function after the state is updated:
// ./src/game.js
const db = require('./db');
const getGame = (id) =>
db('game')
.select(
'game.id',
'game.state',
'game.poll',
'game.budget',
'game.started_at',
'game.paused',
'game.millis_taken_before_started',
'i.injections',
'm.mitigations',
's.systems',
'l.logs',
)
.where({ 'game.id': id })
.joinRaw(
`LEFT JOIN (SELECT gm.game_id, array_agg(to_json(gm)) AS mitigations FROM game_mitigation gm GROUP BY gm.game_id) m ON m.game_id = game.id`,
)
.joinRaw(
`LEFT JOIN (SELECT gs.game_id, array_agg(to_json(gs)) AS systems FROM game_system gs GROUP BY gs.game_id) s ON s.game_id = game.id`,
)
.joinRaw(
`LEFT JOIN (SELECT gi.game_id, array_agg(to_json(gi)) AS injections FROM game_injection gi GROUP BY gi.game_id) i ON i.game_id = game.id`,
)
.joinRaw(
`LEFT JOIN (SELECT gl.game_id, array_agg(to_json(gl)) AS logs FROM game_log gl GROUP BY gl.game_id) l ON l.game_id = game.id`,
)
.first();
The const db = require('./db'); line returns a database connection established via knex, used for querying and updating the database. By calling the function above, the current state of a game can be retrieved, including each mitigation already purchased and still available for sale, online and offline systems, injections that have happened, and the game's log. Here is an example of how this logic is applied after a facilitator triggers a curveball event:
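As a hedged sketch of that flow (the game object's fields follow the columns selected in getGame above; performCurveball and the in-memory games map are illustrative stand-ins for the real knex-backed model, not the project's actual code):

```javascript
// Illustrative stand-in for the knex-backed model: a curveball event
// changes the game's budget and poll, after which the full game state
// is re-read (getGame in the real project) so it can be broadcast.
const games = new Map([
  ['g1', { id: 'g1', state: 'SIMULATION', budget: 3000, poll: 60 }],
]);

// Stand-in for the data access layer's getGame
const getGame = (id) => games.get(id);

const performCurveball = (gameId, { budgetChange = 0, pollChange = 0 }) => {
  const game = games.get(gameId);
  // In the real project these are UPDATEs on the state tables.
  game.budget = Math.max(0, game.budget + budgetChange);
  // Keep the poll percentage within 0..100.
  game.poll = Math.min(100, Math.max(0, game.poll + pollChange));
  // Return the freshly queried game, ready to broadcast to the room.
  return getGame(gameId);
};
```

In the real handler, the returned game is then emitted to every facilitator in the game's Socket.io room.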
As you can see, after the update on the game's state happens – this time a change in budget and poll – the program calls the getGame function and returns its result. By applying this logic, we can manage the state easily: we have to arrange the coordinators of the same game into groups, map each possible event to a corresponding function in the models folder, and broadcast the game to everyone after someone makes a change. Let's see how we achieved this by leveraging WebSockets.
Creating Our Real-Time Socket.io Server with Node.js
As the software we've created is a companion app to an actual tabletop game played at different locations, it is as real-time as it gets. To handle such use cases, where the state of the UIs needs to be synchronized across multiple clients, WebSockets are the go-to solution. To implement the WebSocket server and client, we chose Socket.io. While Socket.io clearly comes with a performance overhead, it freed us from a lot of the hassle that arises from the stateful nature of WebSocket connections, and as the expected load was minuscule, the overhead it introduced was far outweighed by the savings in development time. One of the killer features of Socket.io that fit our use case very well is that operators who join the same game can be separated easily using Socket.io rooms. This way, after a participant updates the game, we can broadcast the new state to the entire room (everyone who has joined a particular game).
To create a socket server, all we need is a Server instance created by the createServer method of the default Node.js http module. For maintainability, we organized the socket.io logic into its own module (see: ./src/socketio.js). This module exports a factory function with one argument: an http Server object. Let's have a look at it:
// ./src/socketio.js
const socketio = require('socket.io');
const SocketEvents = require('./constants/SocketEvents');

module.exports = (http) => {
  const io = socketio(http);
  io.on(SocketEvents.CONNECT, (socket) => {
    socket.on('EVENT', (input) => {
      // Do something with the given input
    });
  });
};
As you can see, the socket server logic is implemented inside the factory function. In the index.js file, this function is then called with the http Server. We didn't have to implement authorization for this project, so there isn't any socket.io middleware authenticating clients before the connection is established. Inside the socket.io module, we created an event handler for each possible action a facilitator can perform, including documenting the responses made to injections, buying mitigations, restoring systems, etc. Then we mapped the methods defined in the data access layer to these handlers.
Bringing Facilitators Together
I previously mentioned that rooms make it easy to separate facilitators by which game they have currently joined. A facilitator can enter a room either by creating a fresh new game or by joining an existing one. Translating this into "WebSocket language", a client emits a createGame or joinGame event. Let's have a look at the corresponding implementation:
If you examine the code snippet above, the gameId variable contains the id of the game the facilitator has currently joined. By leveraging JavaScript closures, we declared this variable inside the connect callback function, so gameId is in scope for all the following handlers. If an organizer tries to create a game while already playing (which means gameId is not null), the socket server first kicks the facilitator out of the previous game's room, then joins the facilitator to the new game room. This is managed by the leave and join methods. The process flow of the joinGame handler is almost identical. The only key difference is that this time the server doesn't create a new game. Instead, it queries the existing one using the getGame method of the data access layer.
What Makes Up Our Event Handlers?
After we successfully brought together our facilitators, we had to create a different handler for each possible event. For the sake of completeness, let's look at all the events that occur during a game:
createGame, joinGame: these events' single purpose is to join the organizer to the correct game room.
startSimulation, pauseSimulation, finishSimulation: these events start the game's timer, pause it, and stop the game entirely. Once someone emits a finishSimulation event, the game can't be restarted.
deliverInjection: using this event, facilitators trigger security threats that should occur at a given time in the game.
respondToInjection, nonCorrectRespondToInjection: these events record the responses made to injections.
restoreSystem: this event restores any system that is offline due to an injection.
changeMitigation: this event is triggered when players buy mitigations to prevent injections.
performAction: when the playing staff performs an action, the client emits this event to the server.
performCurveball: this event occurs when a facilitator triggers unique injections.
These event handlers implement the following rules:
They take up to two arguments: an optional input, which differs for each event, and a predefined callback. The callback uses an exciting Socket.io feature called acknowledgements. It lets us define a callback function on the client side, which the server can invoke with either an error or a game object, and the call then takes effect on the client. Without diving deep into how the front end works (that's a topic for another day), this function pops up an alert with either an error or a success message, shown only to the facilitator who initiated the event.
They update the state of the game by the given inputs according to the event's nature.
They broadcast the new state of the game to the entire room. Hence we can update the view of all organizers accordingly.
First, let's build on our previous example and see how the handler for curveball events is implemented.
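Here is a hedged sketch of what such a handler can look like. The performCurveball and gameUpdated event names follow the article; registerCurveballHandler, the injected model function, and the callback shape are assumptions made to keep the sketch self-contained:

```javascript
// Sketch of the performCurveball socket handler; io, socket, and the model
// function are injected so the sketch has no hard socket.io dependency.
function registerCurveballHandler(io, socket, gameId, performCurveball) {
  socket.on('performCurveball', async ({ curveballId }, callback) => {
    try {
      // update the game's poll and budget, get the new game object back
      const game = await performCurveball(gameId, curveballId);
      // broadcast the latest state to everyone in the game's room...
      io.in(gameId).emit('gameUpdated', game);
      // ...and acknowledge the initiating facilitator
      callback({ game });
    } catch (error) {
      callback({ error });
    }
  });
}
```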
The curveball event handler takes one input, a curveballId, plus the callback mentioned earlier. The performCurveball method then updates the game's poll and budget and returns the new game object. If the update is successful, the socket server emits a gameUpdated event to the game room with the latest state, then calls the callback function with the game object. If any error occurs, the callback is called with an error object instead.
After a facilitator creates a game, a preparation view is first loaded for the players. In this stage, staff members can spend a portion of their budget on mitigations before the game starts. Once the game begins, it can be paused, restarted, or even stopped permanently. Let's have a look at the corresponding implementation:
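A minimal sketch of these timer handlers follows. The startSimulation, pauseSimulation, and finishSimulation event names come from the article; the field names (startedAt, paused, finished) and the in-memory games stand-in are assumptions:

```javascript
// Sketch of the start/pause/finish handlers for the game timer.
function registerTimerHandlers(io, socket, gameId, games) {
  socket.on('startSimulation', (callback) => {
    const game = games[gameId];
    if (game.finished) return callback({ error: 'A finished game cannot be restarted' });
    game.startedAt = Date.now(); // injection trigger times are measured against this
    game.paused = false;
    io.in(gameId).emit('gameUpdated', game);
    callback({ game });
  });

  socket.on('pauseSimulation', (callback) => {
    const game = games[gameId];
    game.paused = true;
    io.in(gameId).emit('gameUpdated', game);
    callback({ game });
  });

  socket.on('finishSimulation', (callback) => {
    const game = games[gameId];
    game.paused = true;
    game.finished = true; // permanent: startSimulation refuses finished games
    io.in(gameId).emit('gameUpdated', game);
    callback({ game });
  });
}
```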
The startSimulation handler kicks off the game's timer, and the pauseSimulation handler pauses or stops the game. Trigger time is essential to determine which injections facilitators can invoke. After organizers trigger a threat, they hand over all the necessary assets to the players. Staff members can then choose how to respond to the injection, either by providing a custom response or by choosing from the predefined options. Besides facing threats, staff members perform actions, restore systems, and buy mitigations. The events corresponding to these activities can be triggered at any time during the game. These event handlers follow the same pattern and implement the three fundamental rules. Please check the public GitHub repo if you would like to examine these callbacks.
Serving The Setup Data
In the chapter explaining the data access layer, I classified the tables into two groups: setup and state tables. State tables contain the state of ongoing games. This data is served and updated via the event-based socket server. Setup data, on the other hand, consists of the available systems, mitigations, actions, and curveball events, the injections that occur during the game, and each possible response to them. This data is exposed via a simple HTTP server. After a facilitator joins a game, the React client requests this data, caches it, and uses it throughout the game. The HTTP server is implemented using the express library. Let's have a look at our app.js:
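As a sketch of what such an app.js can look like, the route registration below is factored into a function so it can be shown (and exercised) without running express itself. The route and table names are assumptions based on the article's list of setup tables:

```javascript
// Sketch of the read-only setup endpoints served over HTTP.
function registerSetupRoutes(app, db) {
  const setupTables = ['systems', 'mitigations', 'actions', 'curveballs', 'injections', 'responses'];
  setupTables.forEach((table) => {
    // GET only: this data is inserted and changed via seeds, not the API
    app.get(`/${table}`, async (req, res) => {
      const rows = await db(table).select(); // knex-style query
      res.json(rows);
    });
  });
}

// In the real app.js this would be wired up as roughly:
//   const express = require('express');
//   const db = require('./db');
//   const app = express();
//   registerSetupRoutes(app, db);
//   module.exports = app;
```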
As you can see, everything is pretty standard here. We didn't need to implement any method other than GET, since this data is inserted and changed using seeds.
Final Thoughts On Our Socket.io Game
Now we can put together how the backend works. State tables store the games' state, and the data access layer returns the new game state after each update. The socket server organizes the facilitators into rooms, so whenever someone changes something, the new game state is broadcast to the entire room. This way we can make sure everyone has an up-to-date view of the game. In addition to the dynamic game data, the static tables are accessible via the HTTP server.
Next time, we will look at how the React client manages all this, and after that I'll present the infrastructure behind the project. You can check out the code of this app in the public GitHub repo!
In case you're looking for experienced full-stack developers, feel free to reach out to us via info@risingstack.com or using the form below this article.
Today Delphi turns 26 years old. A very long time… Many things have changed, some more than others. Here are my 26 picks!
On February 14, 1995, Borland introduced a new tool for developers that sparked a lot of excitement. For over 26 years it has been used to build applications used by billions of people (think of good old Skype), and it is still used today to build apps for many incredibly different tasks. We are hosting a showcase for the occasion. But here I don't want to cover the launch day (you can refer to my old birthdays site) or the showcase, but rather look back at how things have changed over the years and how some have kept their original value.
I picked 13 areas, presenting two images for each (one from 26 years ago and one from today), for a total of 26 pictures!
1. Windows in 1995
When Delphi was released in 1995, the most commonly used PC operating system was Windows 3.1 (along with Windows 3.11, with network support), here running in a VM:
2. Windows in 2021
This is Windows 10, the version currently installed on my primary desktop PC. It has changed quite a bit… and so has the computer's hardware power.
3. Delphi 1 look and feel
This is the user interface of the Delphi IDE from the original release 26 years ago:
4. The Delphi 10.4.1 IDE
This is what Delphi looks like today (with the good old light style I usually use; I know others prefer the dark style):
5. The Web was just getting started
The Internet was just picking up steam, and the most popular online forum for Delphi was on Compuserve. I know, something only older developers will understand: it wasn't a website, it was the entire online experience for some. Here is what a Google search returns:
6. The Web is now everywhere
While it seems obvious how much we rely on the Internet and the Web, it would have been hard to predict. See some data below from https://www.internetlivestats.com/ :
7. Mobile phones were for phone calls, not much more
I don't think I owned a mobile phone in 1995; my first was a Nokia a few years later. A phone at the time looked like this (Ericsson GH688, CC BY 3.0):
8. Smartphones are more powerful than the computers we had
Today we can hardly live without a phone. And phones are, in most cases, multi-core computers with more memory than PCs had back then. And they can run Delphi applications! Some typical apps (well, that's my phone):
9. A window was a TForm in Delphi 1
Since the early days, a Delphi TForm (like other TWinControl classes) encapsulates a Windows handle from user.dll, and form operations call the Windows API and trigger system messages. Delphi is visual (see below) but has a core OOP architecture: an application form inherits from the base TForm class:
10. A window is still a TForm (or actually 2, VCL + FMX)
Still today, a form is the foundation of an application, whether VCL (see the very beginning of the base class definition below) or FireMonkey, in which case forms map to a UI element of Windows, macOS, iOS, Android, or Linux:
11. Video games were just getting started
The video game industry was also in its early days (from Game Art HQ):
12. Video and online games are huge
Here is a new mobile game written in Delphi, from this Embarcadero blog post (note, it is shown in the IDE):
13. Counting to 26 in Delphi 1
This is the code you could write in 1995 to count numbers in Delphi, and the resulting simple application:
14. Counting to 26 is not much different in Delphi today
We can write and compile the exact same code today, both in VCL for Windows and in FireMonkey for desktop and mobile. But we can also take advantage of the new features of the Delphi language to write it as follows:
15. Data was Paradox, DBase, Clipper, FoxPro
Delphi owes its name to its ability to talk to databases (Oracle + Delphi). And it had a wizard to make building a database application easier (we are bringing back something similar!).
16. Data is Oracle, SQL Server, Azure, AWS, REST APIs, and everything else
Today you can use FireDAC and many other libraries to access data in Delphi. But data no longer lives only in databases. A few days ago I blogged about fetching REST API data via Delphi's REST Debugger (see my recent blog post).
17. This is me in 1995 (days after the Delphi launch)
I took a very short trip to Bobbio (it's less than an hour's drive; you can't do much more during a pandemic). Picture by Benny Cantu:
19. RAD was a revolution
Delphi offered (and still offers) a unique combination of fast visual design (like VB before it) and a robust OOP framework, allowing you to use and write components in the same environment and in a seamless way. Here is an ad from the early days:
20. Delphi still makes development fast
21. Books were a very important thing, since you couldn't google a class name or ask on Stack Overflow. Here are some of my early Delphi books:
22. Books are still a thing, printed or ebooks
The technical book market is much smaller and very different, but books are still being printed (and many on Delphi lately). This is my latest one, yet to be published in print:
23. VCL was the best library for WinAPI
No other class library of the time was as well integrated with the Windows API. Even Microsoft's MFC and WinForms never came close to the VCL's quality and completeness. This is a hierarchy of the library (not for Delphi 1, but for Delphi 7, much later):
24. VCL is the best library for WinAPI, COM integration, WinRT, and soon Project Reunion
The library keeps expanding. As of today it wraps Windows APIs, COM and shell objects, and WinRT platform APIs. And we keep adding new components and mappings to new APIs. The VCL already includes features from Microsoft Project Reunion, with more to come. Here is a styled VCL app; it is very easy to take existing applications and make them look modern in a fraction of the time a rewrite would take:
25. Delphi was fun to use
Fun for developers, nice and pleasant. And Delphi 1 had an Easter egg featuring Delphi language architect Anders Hejlsberg:
26. Delphi is fun to use
Delphi is still fun to use today, with an active community and a number of highly talented MVPs. The latest version has an Easter egg showing last year's 25th anniversary:
And to celebrate 26 years, Embarcadero is offering a 26% discount!
26 images to tell the story of Delphi so far. Stay tuned for a new chapter of the story, coming soon. And help us celebrate.
But in the meantime, you can also take advantage of a great offer and buy Delphi at a 26% discount to celebrate the anniversary!
Сегодня исполняется 26 лет компании Delphi. Очень давно… Многие вещи изменились, некоторые больше, другие. Вот мои 26 выборов!
14 февраля 1995 года Borland представила новый инструмент для разработчиков, который вызвал большой энтузиазм и более 26 лет использовался для создания приложений, используемых миллиардами людей (вспомните старый добрый Skype), и он все еще используется сегодня для создание приложений для множества невероятно разных задач. У нас есть витрина для этого. Но здесь я не хочу рассказывать о дне запуска (вы можете сослаться на мой старый сайт дней рождения) или о витрине, а скорее о том, как все изменилось с годами и как некоторые из них сохранили свою первоначальную ценность.
Я выбрал 13 областей, представив для каждой два изображения (одно за 26 лет назад и одно на сегодняшний день), всего 26 изображений!
1. Windows 1995 г.
Когда в 1995 году был выпущен Delphi, наиболее часто используемой операционной системой для ПК была Windows 3.1 (вместе с Windows 3.11 с поддержкой сети), здесь работающая на виртуальной машине:
2. Окна в 2021 году
Это Windows 10, версия, которая сейчас установлена на моем основном настольном ПК. Он немного изменился… а также мощность оборудования компьютера.
3. Внешний вид Delphi 1
Этот пользовательский интерфейс Delphi IDE первоначального выпуска 26 лет назад
4. Среда разработки Delphi 10.4.1.
Вот как выглядит Delphi сегодня (со старым добрым светлым стилем, который я обычно использую, я знаю, что другие предпочитают темный стиль):
5. Интернет начинался
Интернет только зарождался, и самый популярный онлайн-форум для Delphi был на Compuserve — я знаю, это понимают только старые разработчики — это не был веб-сайт, для некоторых это был весь онлайн-опыт. Вот что возвращает поиск Google:
6. Интернет теперь повсюду
Хотя кажется очевидным, как мы полагаемся на Интернет и Интернет, это было бы трудно предсказать. См. Некоторые данные ниже с https://www.internetlivestats.com/ :
7. Мобильные телефоны для телефонных звонков и многое другое.
Не думаю, что в 1995 году у меня был мобильный телефон. Моим первым телефоном стала Nokia несколько лет спустя. Телефон в то время был таким ( Ericsson GH688, CC BY 3.0 ):
8. Смартфоны мощнее компьютеров, которые у нас были.
Сегодня мы не можем жить без телефона. А телефоны в большинстве случаев являются многоядерными компьютерами с большим объемом памяти, чем у ПК в то время. И они могут запускать приложения Delphi! Некоторые типичные приложения (ну, это мой телефон):
9. Окно было TForm в Delphi 1
С первых дней Delphi TForm (как и другие классы TWinControl) инкапсулирует дескриптор Windows из user.dll, а операции формы вызывают Windows API и запускают системные сообщения. Delphi является визуальным (см. Ниже), но имеет базовую архитектуру ООП — форма приложения наследуется от базового класса TForm:
10. A window is still a TForm (or actually two, VCL + FMX)
Today a form is still at the heart of applications, whether VCL (see the very beginning of the base class declaration below) or FireMonkey, in which case forms map to a native UI element on Windows, macOS, iOS, Android, or Linux:
11. Video games were getting started
The video game industry was also at its dawn (from Game Art HQ):
12. Video and online games are huge
Here is a new mobile game written in Delphi, from an Embarcadero blog post (notice it is shown in the IDE):
13. Counting to 26 in Delphi 1
This is the code you could have written in 1995 to count numbers in Delphi, and the resulting simple application:
14. Counting to 26 in Delphi today is not much different
Today we can write and compile the same exact code, both in the VCL for Windows and in FireMonkey for desktop and mobile. But we can also take advantage of new Delphi language features and write code like the following:
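As an illustrative sketch (not the exact code from the screenshots), a modernized counting handler might look like this, assuming Delphi 10.3 or later for inline variable declarations:

```pascal
procedure TForm1.BtnCountClick(Sender: TObject);
begin
  ListBox1.Items.Clear;
  // Inline variable declaration with type inference (Delphi 10.3+)
  for var I := 1 to 26 do
    // Intrinsic type helpers like Integer.ToString are another
    // language addition that did not exist in 1995
    ListBox1.Items.Add(I.ToString);
end;
```

The same handler compiles for a VCL TListBox on Windows and an FMX TListBox on desktop and mobile.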
15. Data was Paradox, dBase, Clipper, FoxPro
Delphi owes its name to its ability to talk to databases (the oracle at Delphi). And it had a wizard to simplify building a database application (we are bringing back something similar!):
16. Data is Oracle, SQL Server, Azure, AWS, REST APIs, and more
Today you can use FireDAC and many other libraries to access data in Delphi. But data is no longer only in databases. A few days ago I blogged about fetching REST API data via the Delphi REST Debugger (see my recent blog post).
17. This is me in 1995 (a few days after the Delphi launch)
I took a very short trip to Bobbio (it is less than an hour's drive; you cannot go farther during the pandemic). Photo by Benny Cantù:
19. RAD was a revolution
Delphi offered (and still offers) a unique combination of fast visual design (like VB before it) and a solid OOP framework, letting you run applications and write components in a single environment, seamlessly. Here is an ad from the early days:
21. Books mattered a great deal, since you could not google a class name or ask on Stack Overflow. Here are some of my early Delphi books:
22. Books still exist, in print or as ebooks
The technical book market is much smaller and very different, but books are still being printed (including many on Delphi lately). This is my latest, not yet out in print:
23. The VCL was the best library for the WinAPI
No other class library of the time was so well integrated with the Windows API. Even Microsoft's MFC, and later WinForms, never came close to the quality and completeness of the VCL. This is the library hierarchy (though not for Delphi 1; it is for Delphi 7, much later):
24. The VCL is the best library for the WinAPI, COM integration, WinRT, and soon Project Reunion
The library keeps expanding and today covers the Windows API, COM and shell objects, and the WinRT platform APIs. And we keep adding new components and mappings to new APIs. The VCL already includes Microsoft Project Reunion features, with more to come in the future. Here are styled VCL applications; it is very easy to take existing applications and make them look modern in a fraction of the time a rewrite would take:
25. Delphi was fun to use
Fun for developers: pleasant and enjoyable. And Delphi 1 had an easter egg featuring Delphi language architect Anders Hejlsberg:
26. Delphi is fun to use
Delphi is still fun to use, with an active community and a number of talented MVPs. The latest version has an easter egg celebrating last year's 25th anniversary:
In honor of its 26th birthday, Embarcadero is offering a 26% discount!
That's 26 images telling the story of Delphi so far. Stay tuned, a new chapter of the story is coming soon. And help us celebrate.
In the meantime, you can also take advantage of a great offer and buy Delphi at a 26% discount to celebrate the anniversary!
InterBase has several built-in functions that developers can use to create and optimize SQL queries. In some situations you may need to extend queries or work with more complex queries that include string, date, and statistical functions. InterBase addresses this by supporting libraries of user-defined functions, or UDFs. UDFs are programs for performing custom operations; they are extensions to the server and run as part of the server process. You can access UDFs via isql or a program in a host language, and you can also reference UDFs in stored procedure and trigger bodies. UDFs can be used anywhere in a database application that a built-in SQL function can be used.
Adding UDFs to InterBase
Several UDF libraries for InterBase are available on the internet, but one with many functions worth a look is FreeAdhocUDF.
InterBase ships with two UDF files in the installation: one named udflib.dll (demonstrated with the employee sample database in the video) and the ib_udf file.
You can also build your own UDF in this simple three-step process:
Write the function in any programming language that can create a shared library. Functions written in Java are not supported.
Compile the function and link it into a dynamically linked or shared library.
Use DECLARE EXTERNAL FUNCTION to declare each UDF to each database in which you need to use it.
For more information on building your own UDFs, see the docwiki.
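As an example of the third step, the lower function from the standard ib_udf library can be declared to a database roughly like this (a sketch based on the stock ib_udf declarations; the declaration must be repeated in every database that uses the UDF):

```sql
/* Declare the UDF to the current database */
DECLARE EXTERNAL FUNCTION lower
  CSTRING(255)
  RETURNS CSTRING(255) FREE_IT
  ENTRY_POINT 'IB_UDF_lower' MODULE_NAME 'ib_udf';

/* Once declared, the UDF can be used wherever a built-in function can */
SELECT lower(last_name) FROM employee;
```

FREE_IT tells the server to release the string the UDF allocates, and MODULE_NAME names the shared library the server loads at call time.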
Much like Windows themes, VCL styles make it easy to radically change the appearance of your VCL applications, giving your application that extra bit of polish and professionalism. Thanks to the per-control VCL styles introduced in 10.4, a single form can take advantage of multiple styles, giving you maximum customization and control. RAD Studio, Delphi, and C++Builder ship with a selection of VCL styles, and additional premium styles are available in GetIt or from DelphiStyles. In this webinar replay, together with Alexey Sharagin, the creative genius behind DelphiStyles, you will learn everything you ever wanted to know about working with VCL styles, including customizing them and creating your own.
TStyleManager properties for customizing VCL styles for forms, dialogs, and menus
Specifics of high-DPI support
Benefits of using VCL styles in your application
Details of per-control style support
Creating and customizing styles with the Bitmap Style Designer tool
Using styles from GetIt and DelphiStyles.com
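As a minimal illustration of the first topic above, a VCL style can be applied at run time through TStyleManager; this sketch assumes the style has been added to the project (Project > Options > Application > Appearance), and the style name is illustrative:

```pascal
uses
  Vcl.Themes;

procedure ApplyPreferredStyle;
begin
  // TrySetStyle returns False if the named style is not registered,
  // in which case we fall back to the platform's native look.
  if not TStyleManager.TrySetStyle('Carbon', False) then
    TStyleManager.SetStyle(TStyleManager.SystemStyle);
end;
```

Calling this early in the .dpr, before the main form is created, restyles the whole application; the per-control StyleName property introduced in 10.4 refines this per component.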
An interesting article has appeared on Dev-Insider describing databases as the backbone of enterprise applications. The relational database remains the bedrock of applications in the enterprise environment.
The article lists and compares typical relational databases that are frequently used in enterprise application landscapes.
InterBase is considered alongside MS SQL Server, MySQL, and others.
Among the news at the start of this year was the announcement that Apilayer had joined Idera as part of its stated strategy. Apilayer is a company that provides a solid set of microservices for looking up descriptions and locations, retrieving currency exchange rates, weather, and stock market data, plus conversion and format-validation services. You can read more about these microservices on the company's page at https://apilayer.com/. It is nice that each of these services offers free access. To connect, you need to create an account on the site of each service you want to use.
The services are quite simple and come with detailed documentation describing the functions and call parameters of their REST APIs.
Here I would like to show how simply and quickly you can hook these services into your Delphi or C++ programs. As an example, let's take the weather data service https://weatherstack.com/, a limited set of whose features can be used for free.
I created an account in advance and obtained a personal access key for connecting an application to the service. Anyone can manage that step on their own.
Let's create a new Delphi project. For this example I will use the FMX framework, so the application can also run on Android. But first we need to get familiar with the API and pick the right parameters. The easiest way to do this is with the REST Debugger tool that ships with Delphi; it can be launched directly from the Tools entry in the IDE's main menu. Specify the API entry point URL, https://api.weatherstack.com, and the GET method; then, on the Parameters tab, enter the current function in the Resource field. Now set the call parameters, of which access_key is mandatory. The location/region/city for which to fetch weather data is specified in the second parameter, query. Set the Content-Type to accept JSON, and all that remains is to press the Send Request button to get the result. If everything is correct, the result will look like this:
The application project will consist of a single form containing a top panel with a TLabel, a TEdit for entering the location, and a TSpeedButton (refreshtoolbutton style), plus a TTabControl with two tabs, JSON and Data, on which we place a TMemo and a TListBox, respectively.
To set up REST API access from the application, it is enough to press the Copy Components button in the REST Debugger (everything needed is copied to the clipboard) and then do Edit > Paste onto an application form in the IDE; for simplicity, the main one. Five components appear with their properties already set to the tuned and debugged parameters: TRESTClient, TRESTRequest, TRESTResponse, TRESTResponseDataSetAdapter, and TFDMemTable. The first three handle the REST call and the response, while the adapter converts the JSON data into a dataset. Right at design time you can call the TRESTRequest component's Execute and check the result on screen.
It remains to bind the received data to components that show it to the user at run time. This takes a minimum of code: handling the button click and loading the tabular data from the dataset into the list box. Displaying the JSON result in the TMemo on the first tab is easiest to do via Visual LiveBindings, which does not require writing a single line of code.
procedure TForm4.FillData;
//var
//  ListBoxItem: TListBoxItem;
begin
  ListBox1.BeginUpdate;
  try
    ListBox1.Items.Clear;
    // One list item per dataset field: "FieldName = Value"
    for var F in FDMemTable1.Fields do
    begin
      ListBox1.Items.AddPair(F.DisplayName, F.AsString);
      // The classic, more verbose approach mentioned in the text:
      //ListBoxItem := TListBoxItem.Create(ListBox1);
      //ListBoxItem.Text := Format('%s = %s', [F.DisplayName, F.AsString]);
      //ListBox1.AddObject(ListBoxItem);
    end;
  finally
    ListBox1.EndUpdate;
  end;
end;

procedure TForm4.SpeedButton1Click(Sender: TObject);
begin
  if Edit1.Text <> '' then
  begin
    // Pass the location entered by the user to the service and refresh
    RESTRequest1.Params.ParameterByName('query').Value := Edit1.Text;
    RESTRequest1.Execute;
    FillData;
  end;
end;
In the SpeedButton1Click procedure the call is made only if a location name was entered in Edit1. That name is assigned to the query parameter, the service is invoked, and in the FillData procedure the data received into FDMemTable1 is used to populate ListBox1. Initially it was all done the classic way (commented out in the code sample), until it turned out that a single line calling a different method was enough.
The results after compiling and running can be seen in the pictures below.
Other Apilayer microservices are called in a similar way; some are even simpler.
Иногда вашему приложению требуется пользовательский интерфейс, но как лучше всего сделать его для приложений Python? Введите DelphiVCL для Python. VCL — это зрелая среда графического пользовательского интерфейса Windows с огромной библиотекой включенных визуальных компонентов и надежной коллекцией сторонних компонентов. Это лучшая среда для собственных приложений Windows, но как использовать ее с Python? Благодаря пакету Python DelphiVCL VCL представляет собой первоклассный пакет для создания собственных графических интерфейсов Windows с помощью Python. Нужны дополнительные инструменты для дизайна? Вы можете создать весь графический интерфейс в Delphi, а затем написать всю логику на Python. DelphiVCL — это самая быстрая, наиболее продуманная и полная библиотека графического интерфейса для разработки собственного графического интерфейса Windows Python.
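A minimal sketch of what a DelphiVCL application can look like (this assumes Windows and the delphivcl package from PyPI; the form and label names are illustrative and the snippet cannot run outside Windows):

```python
# pip install delphivcl   (Windows only)
from delphivcl import Application, Form, Label

class MainForm(Form):
    def __init__(self, owner):
        # Properties map one-to-one onto the underlying VCL TForm
        self.Caption = "Hello from DelphiVCL"
        self.SetBounds(100, 100, 400, 200)
        self.lbl = Label(self)
        self.lbl.SetProps(Parent=self, Caption="A native VCL label",
                          Left=20, Top=20)

def main():
    Application.Initialize()
    form = MainForm(Application)
    form.Show()
    Application.Run()   # standard VCL message loop

if __name__ == "__main__":
    main()
```

The Python class mirrors the Delphi TForm-descendant pattern: the VCL owns the window, and Python supplies the logic.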
How do Delphi, WPF .NET Framework, and Electron compare to one another, and what is the best way to make an objective comparison? Embarcadero commissioned a whitepaper to investigate the differences between Delphi, WPF .NET Framework, and Electron for building Windows desktop applications. The benchmark application, a Windows 10 Calculator clone, was recreated in each framework by three volunteers: Delphi Most Valuable Professionals (MVPs), a freelance WPF expert, and a freelance Electron developer. In this blog post we will explore the long-term viability metric, which is part of the functionality comparison used in the whitepaper.
Long-term viability
When companies choose Delphi as their development framework, they are investing in a proprietary framework (which includes the runtime library source code) with upfront costs and an optional annual update fee. For this cost they get a stable, backward-compatible, and growing framework, and they can be confident that applications developed today can be supported and maintained in the future.
Windows Presentation Foundation with the .NET Framework offers companies an economical framework with Microsoft's full backing, but it includes all the challenges that Microsoft's choices induce. WPF has a shorter history than Delphi, but it was open-sourced in 2018, which could give some versions a good long-term outlook despite its ties to the proprietary .NET Framework for most Windows development. According to Microsoft, .NET Framework 4.8, released on April 18, 2019, was the final version.
Electron is a free, open-source platform that gives companies the opportunity to develop applications from any major operating system. However, Electron's future is uncertain. The Electron project is run by GitHub, a subsidiary of Microsoft. It is the newest of the three frameworks and is still in its honeymoon phase. It lacks a native IDE, which leaves companies free to choose their tools, but also removes some conveniences such as integrated compilation and bundled test libraries. Companies building internal tools would have a harder time with Electron than with the other frameworks.
Let's take a look at each framework.
Delphi
Delphi has been growing, maturing, and expanding since 1995. Its development maintains backward compatibility to the extent that a 1995 application can be ported to the current Delphi version with minimal changes. Comprehensive documentation supports maintenance, and a full support team is available to help with upgrades, migration, or troubleshooting. At the time of writing, the latest version of Delphi is available in RAD Studio 10.4.1 Sydney, released on September 2, 2020. Want to learn more? Read the release notes of the many Delphi versions.
For some context on the programming-language timeline: C++ appeared in 1983, Python in 1991, and Java, PHP, JavaScript, and Delphi all in 1995. As you can see, 1995 was a birth year for many of these languages. The Delphi Anniversary website contains a Delphi release timeline from 1995 to the present. Here is an excerpt from the timeline of releases over the past 25 years.
DELPHI 1 – FEBRUARY 14, 1995
16-bit Windows 3.1 support, visual two-way tools, components/VCL, database support via the BDE and SQL Links, live database data at design time
DELPHI 2 (1996)
32-bit support for Windows 95, database grids, OLE Automation, visual form inheritance, long strings, included Delphi 1 for 16-bit development
DELPHI 2010 (2009)
Attributes, extended RTTI, Direct2D canvas, Windows 7 support, touch/gestures, source code formatter, thread-specific breakpoints, debugger visualizers, IOUtils unit for files, paths, and directories, source code audits and metrics, background compilation, source code for MIDAS.DLL
DELPHI XE (2010)
Regular expression library, Subversion integration, dbExpress filters, authentication, proxy generation, JavaScript framework, REST support, Indy WebBroker, cloud (Amazon EC2, Microsoft Azure), build groups, named threads in the debugger, command-line audits, metrics, and documentation generation
DELPHI XE2 (2011)
64-bit Windows, Mac OS X, FireMonkey, LiveBindings (FireMonkey and VCL), VCL styles, unit scope names, Platform Assistant, DataSnap connectors for mobile devices, cloud API, HTTPS support, TCP monitoring, dbExpress support for ODBC drivers, Deployment Manager
DELPHI XE3 (2012)
Metropolis UI for Windows 8, 7, Vista, and XP, FM actions, touch/gestures, layouts and anchors, FM support for bitmap styles, TMaterialSource for FM 3D components, FM audio/video, VCL/FM support for sensor devices, FM location sensor component, virtual keyboard support, DirectX 10 support
DELPHI XE4 (APRIL 2013)
iOS support (device, simulator, iOS App Store), iOS support for standard and Retina displays, iOS styles, Retina styles, virtual keyboards, Mobile Form Designer, TWebBrowser component, iOS ARC (automatic reference counting) for all TObject classes, platform services, notification, location, motion, and orientation sensor components, TListView component, Mac OS X full-screen support, Deployment Manager for iOS devices, FireDAC universal data access components, InterBase IBLite and IBToGo
DELPHI XE5 (SEPTEMBER 2013)
Android support (devices and emulator; OS versions Jelly Bean, Ice Cream Sandwich, and Gingerbread), notification component, iOS 7 style support, configurable mobile form designer, Deployment Manager for Android devices, REST Services client access and authentication components, Android support for all of the XE4 FM and database features listed above
DELPHI XE6 (APRIL 2014)
Windows 7 and 8.1 styles, access to cloud-based RESTful web services, FireDAC compatibility with more databases, fully integrated InterBase support
DELPHI XE7 (SEPTEMBER 2014)
FireMonkey multi-device applications supporting both desktop and mobile platforms, embedded IBLite database for Windows, Mac, Android, and iOS, multi-display support, multi-touch support and gesture changes, full-screen immersive mode for Android, FireMonkey pull-to-refresh support for TListView on iOS and Android, FireMonkey save-state feature
DELPHI XE8 (APRIL 2015)
GetIt Package Manager, FireDAC improvements, new Embarcadero Community toolbar, native presentation of TListView, TSwitch, TMemo, TCalendar, TMultiView, and TEdit on iOS, interactive maps, new media library options, InputQuery now supports masking input fields
DELPHI 10 'SEATTLE' (AUGUST 2015)
Android background services support, FireDAC support for the NoSQL MongoDB database, FireMonkey control zOrder support for Windows, new TBeaconDevice class for turning a device on any of the supported platforms into a beacon, StyleViewer for the Windows 10 style in the Bitmap Style Designer, high-DPI awareness and 4K monitor support, Windows 10 styles, support for Android services in the IDE, support for invoking WinRT APIs
DELPHI 10.1 'BERLIN' (APRIL 2016)
Android 6.0 support, Windows Desktop Bridge support, address book for iOS and Android, new ListView item designer, new CalendarView control, QuickEdits for the VCL, high-DPI support on Windows, property change hints, EMS Apache server support, GetIt-based web installer
DELPHI 10.2 'TOKYO' (MARCH 2017)
64-bit Linux support for Delphi, FireDAC Linux support for all Linux-capable DBMSs, MariaDB support (v5.5), MySQL v5.7 support and Firebird direct I/O support, QuickEdits for FMX and new VCL controls for Windows 10, updated IDE look and feel (dark theme), RAD Server deployment license included
DELPHI 10.3 'RIO' (NOVEMBER 2018)
C++17 for Win32, new Delphi language features, FireMonkey Android zOrder, native controls and API level 26, Windows 10, VCL, and high-DPI improvements, a major modernization of the IDE user interface, RAD Server architecture expansion, quality and performance improvements
DELPHI 10.3.1 'RIO' (FEBRUARY 2019)
Extended support for iOS 12 and iPhone X-series devices. RAD Server Console UI redesign and migration to the Ext JS framework (available via GetIt). Improved FireDAC support for Firebird 3.0.4 and Firebird Embedded. HTTP and SOAP client library improvements on Windows. Two new IDE productivity tools: Bookmarks and Navigator. 15 new custom VCL Windows and multi-device FireMonkey styles.
DELPHI 10.3.2 'RIO' (JULY 2019)
Delphi macOS 64-bit, C++17 for Windows 64-bit, C++ LSP Code Insight improvements, RAD Server wizards and deployment improvements, extended Firebase Android support, Delphi Linux client application support
DELPHI 10.3.3 'RIO' (NOVEMBER 2019)
Delphi Android 64-bit support, iOS 13 and macOS Catalina (Delphi) support, RAD Server Docker deployment, Enterprise Connectors in the Enterprise & Architect editions
DELPHI 10.4 'SYDNEY' (MAY 2020)
Significantly improved high-performance native Windows support, higher productivity through blazing-fast code completion, faster code with managed records and improved parallel tasks on modern multi-core CPUs, over 1,000 quality and performance improvements, and much more.
DELPHI 10.4.1 'SYDNEY' (SEPTEMBER 2020)
RAD Studio 10.4.1 focuses heavily on quality improvements in the IDE, Delphi Code Insight (LSP), the parallel library, SOAP and XML, the C++ toolchain, FireMonkey, VCL, the Delphi compiler, and iOS deployment.
WPF .NET Framework
WPF was released in 2006 and developed alongside the .NET Framework. It was open-sourced by Microsoft in 2018 and has several roadmaps indicating community commitment and growth for the near future. Major .NET changes and Microsoft's shifting design decisions affect WPF's long-term viability. According to Microsoft, WPF .NET Framework 4.8 was the final version of the .NET Framework, released on April 18, 2019.
WPF was introduced in 2006 with .NET Framework 3.0. According to an article on the CodeProject website, the WPF versions and enhancements are listed in this table:
WPF Version | Release (YYYY-MM) | .NET Version | Visual Studio Version | Major Features
3.0     | 2006-11 | 3.0     | N/A     | Initial release. WPF development can also be done with VS 2005 (released in November 2005), with a few additions.
3.5     | 2007-11 | 3.5     | VS 2008 | Changes and improvements in: application model, data binding, controls, documents, annotations, and 3-D UI elements.
3.5 SP1 | 2008-08 | 3.5 SP1 | N/A     | Native splash screen support, new WebBrowser control, DirectX pixel shader support. Faster startup time and improved performance for bitmap effects.
4.0     | 2010-04 | 4.0     | VS 2010 | New controls: Calendar, DataGrid, and DatePicker. Multi-touch and manipulation.
4.5     | 2012-08 | 4.5     | VS 2012 | New Ribbon control, new INotifyDataErrorInfo interface.
4.5.1   | 2013-10 | 4.5.1   | VS 2013 | No major change.
4.5.2   | 2014-05 | 4.5.2   | N/A     | No major change.
4.6     | 2015-07 | 4.6     | VS 2015 | Transparent child window support, HDPI and touch improvements.
.NET Framework 4.6.1 – The release of .NET Framework 4.6.1 was announced on November 30, 2015. This version requires Windows 7 SP1 or later and includes new features and APIs.
.NET Framework 4.6.2 – The preview of .NET Framework 4.6.2 was announced on March 30, 2016, and it was released on August 2, 2016. This version requires Windows 7 SP1 or later.
.NET Framework 4.7 – On April 5, 2017, Microsoft announced that .NET Framework 4.7 was integrated into the Windows 10 Creators Update, promising a standalone installer for other Windows versions. An update for Visual Studio 2017 was released on that date to add support for targeting .NET Framework 4.7. The promised standalone installer for Windows 7 and later was released on May 2, 2017, but it had prerequisites not included in the package.
.NET Framework 4.7.1 – .NET Framework 4.7.1 was released on October 17, 2017. Besides fixes and new features, it resolves a d3dcompiler dependency issue. It also provides out-of-the-box compatibility with .NET Standard 2.0.
.NET Framework 4.7.2 – .NET Framework 4.7.2 was released on April 30, 2018. Changes include improvements to ASP.NET, BCL, CLR, ClickOnce, networking, SQL, WCF, Windows Forms, Workflow, and WPF. This version is included in Server 2019.
.NET Framework 4.8 – .NET Framework 4.8 was released on April 18, 2019. It was the final version of the .NET Framework; all future work went into the .NET Core platform, which eventually became .NET 5 and later. This version included JIT improvements ported from .NET Core 2.1, high-DPI improvements for WPF applications, accessibility improvements, performance updates, and security improvements. It supports Windows 7, Server 2008 R2, Server 2012, 8.1, Server 2012 R2, 10, Server 2016, and Server 2019, and also ships as a Windows container image. The latest release is 4.8.0 build 3928, published on July 25, 2019, with an offline installer size of 111 MB and a digital signature dated July 25, 2019.
- Wikipedia
Electron
Electron was released in 2013, is actively developed and maintained by GitHub, and has been quick to support new technologies such as Apple Silicon (circa November 2020). It lacks the history and stable longevity needed to determine whether Electron apps built in 2020 will survive through 2030. GitHub is a subsidiary of Microsoft. Electron offers a free alternative to Delphi and WPF, familiarity for front-end developers, and cross-platform capability, at the cost of IP protection, standard IDE tooling, and application performance.
According to the Electron release timeline (https://www.electronjs.org/docs/tutorial/electron-timelines), here are the releases.
Version | -beta.1    | Stable     | Chrome | Node
2.0.0   | 2018-02-21 | 2018-05-01 | M61    | v8.9
3.0.0   | 2018-06-21 | 2018-09-18 | M66    | v10.2
4.0.0   | 2018-10-11 | 2018-12-20 | M69    | v10.11
5.0.0   | 2019-01-22 | 2019-04-24 | M73    | v12.0
6.0.0   | 2019-05-01 | 2019-07-30 | M76    | v12.4
7.0.0   | 2019-08-01 | 2019-10-22 | M78    | v12.8
8.0.0   | 2019-10-24 | 2020-02-04 | M80    | v12.13
9.0.0   | 2020-02-06 | 2020-05-19 | M83    | v12.14
10.0.0  | 2020-05-21 | 2020-08-25 | M85    | v12.16
11.0.0  | 2020-08-27 | 2020-11-17 | M87    | v12.18
12.0.0  | 2020-11-19 | 2021-03-02 | M89    | v14.x
Delphi offers the most secure long-term outlook, the best intellectual property protection, and the easiest in-house customization, at the cost of a one-time commercial license purchase. WPF has a lower barrier to entry and better accessibility features, but it is subject to Microsoft's .NET overhauls, is harder to customize, and can easily be decompiled. Electron is completely free and can be developed on any of the three major desktop platforms, but this flexibility is paid for by its uncertain long-term prospects, relying on corporate sponsorship and community support for continued development.
Explore all the metrics in the whitepaper "Discovering the Best Developer Framework Through Benchmarking":
¿Cómo funcionan Delphi, WPF .NET Framework y Electron en comparación entre sí, y cuál es la mejor manera de hacer una comparación objetiva? Embarcadero encargó un documento técnico para investigar las diferencias entre Delphi, WPF .NET Framework y Electron para crear aplicaciones de escritorio de Windows. La aplicación de referencia, un clon de la Calculadora de Windows 10, fue recreada en cada marco por tres voluntarios de Delphi Most Valuable Professionals (MVP), un desarrollador experto independiente de WPF y un desarrollador experto independiente Electron. En esta publicación de blog, vamos a explorar la métrica de viabilidad a largo plazo, que es parte de la comparación de funcionalidad utilizada en el documento técnico.
Viabilidad a largo plazo
Cuando las empresas eligen Delphi como su marco de desarrollo, están invirtiendo en un marco propietario (que incluye el código fuente de la biblioteca en tiempo de ejecución) con costos iniciales y una tarifa de actualización anual opcional. Por este costo, obtienen un marco estable, compatible con versiones anteriores y en crecimiento, y pueden estar seguros de que las aplicaciones desarrolladas hoy serán compatibles y se mantendrán en el futuro.
Windows Presentation Foundation with the .NET Framework offers companies an inexpensive framework with Microsoft's full backing, but it comes with all the challenges that Microsoft's choices induce. WPF has a shorter history than Delphi, but it was open-sourced in 2018, which could give some version of it a bright long-term outlook despite its ties to the proprietary .NET Framework for most Windows development. According to Microsoft, .NET Framework 4.8, released on April 18, 2019, was the final version.
Electron is a free, open-source platform that gives companies the opportunity to develop applications from any major operating system. Electron's future is uncertain, however. The Electron project is run by GitHub, which is now a Microsoft subsidiary. It is the newest of the three frameworks and is still in its honeymoon phase. It lacks a native IDE, which gives companies a choice of tooling but also removes conveniences such as integrated compilation and bundled testing libraries. Companies developing internal tools would have a harder time with Electron than with the other frameworks.
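To illustrate what the lack of a native IDE means in practice: an Electron project is assembled by hand from a `package.json` manifest and a main-process script, with every build, run, and test tool chosen and wired up by the developer. A minimal sketch is shown below; the project name, file names, and the version numbers in the dependency list are illustrative, not taken from the whitepaper's benchmark project.

```json
{
  "name": "calculator-clone",
  "version": "1.0.0",
  "description": "Hypothetical Windows 10 Calculator clone, as an Electron app",
  "main": "main.js",
  "scripts": {
    "start": "electron .",
    "test": "mocha"
  },
  "devDependencies": {
    "electron": "^12.0.0",
    "mocha": "^8.0.0"
  }
}
```

Unlike a Delphi project file or a Visual Studio `.csproj`, nothing here is generated or enforced by an IDE: the developer picks the test runner (mocha above is one common choice), the packager, and any bundler themselves, which is the flexibility-versus-convenience trade-off described above.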
Let's take a look at each framework.
Delphi
Delphi has been growing, maturing, and expanding since 1995. Its development maintains backward compatibility to the degree that an application from 1995 can be ported to the current version of Delphi with minimal changes. Comprehensive documentation aids maintenance, and a full support team is available to help with upgrades, migration, or troubleshooting. At the time of writing, the latest version of Delphi is available in RAD Studio 10.4.1 Sydney, which was released on September 2, 2020. Want to learn more? Check out the release notes for the many versions of Delphi.
For some programming-language timeline context: C++ came out in 1983, Python in 1991, and Java, PHP, JavaScript, and Delphi all in 1995. As you can see, 1995 was a birth year for many of these languages. The Delphi anniversary website contains a Delphi release timeline from 1995 to the present. Here is an excerpt of the release timeline over the past 25 years.
DELPHI 1 – FEBRUARY 14, 1995
16-bit Windows 3.1 support, two-way visual tools, components/VCL, database support via BDE and SQL Links, live database data at design time
DELPHI 2 (1996)
32-bit Windows 95 support, database grid, OLE Automation, visual form inheritance, long strings, Delphi 1 included for 16-bit
DELPHI 3 (1997)
Interfaces (COM-based), Code Insight, component templates, DLL debugging, WebBroker, ActiveForms, component packages, MIDAS multi-tier architecture
DELPHI 4 (1998)
Docking, anchors and constraints, method overloading, dynamic arrays, Windows 98 support
DELPHI 5 (1999)
Desktop layouts, frames, XML support, DBGo for ADO, language translations
DELPHI 6 (2001)
Structure window, SOAP web services, dbExpress, BizSnap, WebSnap, DataSnap
DELPHI 7 (2002)
Web application development, Windows XP themes
DELPHI 8 (2003)
.NET support
DELPHI 2005 (2004)
Multi-unit namespaces, Error Insight, History tab, for..in, function inlining, theme-enabled IDE, refactorings, wildcard in uses declarations, Data Explorer, integrated unit testing
DELPHI 2006 (2005)
Operator overloading, static methods and properties, designer guidelines, form positioner view, live code templates, block completion, line numbers, change bars, synchronized editing, code folding and method navigation, debugger tooltips, searchable Tool Palette, FastMM memory manager, MySQL support, Unicode support in dbExpress, TTrayIcon, TFlowPanel, TGridPanel
DELPHI 2007 (2006)
MSBuild, build events, build configurations, Windows Vista support (glassing, theming), dbExpress 4 (connection pooling, delegate drivers), CPU viewer windows, FastCode enhancements, IntraWeb/AJAX support, Welcome Page, sim-ship of English, French, German, and Japanese
DELPHI 2009 (2008)
Unicode, generics, anonymous methods, Ribbon controls, DataSnap, build configurations, class explorer, type library editor window, PNG support
DELPHI 2010 (2009)
Attributes, enhanced RTTI, Direct2D canvas, Windows 7 support, touch/gestures, source code formatter, thread-specific breakpoints, debugger visualizers, IOUtils unit for files, paths, and directories, source code audits and metrics, background compilation, source code for MIDAS.DLL
DELPHI XE (2010)
Regular expression library, Subversion integration, dbExpress filters, authentication, proxy generation, JavaScript framework, REST support, Indy WebBroker, cloud (Amazon EC2, Microsoft Azure), build groups, named threads in the debugger, command-line audits, metrics, and documentation generation
DELPHI XE2 (2011)
64-bit Windows, Mac OS X, FireMonkey, LiveBindings (FireMonkey and VCL), VCL styles, unit scope names, Platform Assistant, DataSnap connectors for mobile devices, cloud API, HTTPS support, TCP monitoring, dbExpress support for ODBC drivers, Deployment Manager
DELPHI XE3 (2012)
Metropolis UI for Windows 8, 7, Vista, and XP, FM actions, touch/gestures, layouts and anchors, FM support for bitmap styles, TMaterial source for FM 3D components, FM audio/video, VCL/FM support for sensor devices, FM location sensor component, virtual keyboard support, DirectX 10 support
DELPHI XE4 (APRIL 2013)
iOS support (device, simulator, iOS App Store), iOS support for standard and Retina displays, iOS styles, Retina styles, virtual keyboards, mobile form designer, TWebBrowser component, iOS ARC (automatic reference counting) for all TObject classes, platform services, notifications, location, motion, and orientation sensor components, TListView component, Mac OS X full-screen support, Deployment Manager for iOS devices, FireDAC universal data access components, InterBase IBLite and IBToGo
DELPHI XE5 (SEPTEMBER 2013)
Android support: devices and emulator; OS versions Jelly Bean, Ice Cream Sandwich, and Gingerbread. Notification component, iOS 7 style support, configurable mobile form designer, Deployment Manager for Android devices, REST services client access and authentication components, Android support for all of the XE4 FM and database features listed above
DELPHI XE6 (APRIL 2014)
Windows 7 and 8.1 styles, access to cloud-based RESTful web services, FireDAC support for more databases, fully integrated InterBase support
DELPHI XE7 (SEPTEMBER 2014)
FireMonkey multi-device applications supporting desktop and mobile platforms, IBLite embeddable database for Windows, Mac, Android, and iOS, multi-display support, multi-touch support and gesture changes, full-screen immersive mode for Android, FireMonkey pull-to-refresh feature for TListView on iOS and Android, FireMonkey Save State feature
DELPHI XE8 (APRIL 2015)
GetIt Package Manager, FireDAC enhancements, new Embarcadero Community toolbar, native presentation of TListView, TSwitch, TMemo, TCalendar, TMultiView, and TEdit on iOS, interactive maps, new options for the media library, InputQuery now supports masking input fields
DELPHI 10 ‘SEATTLE’ (AUGUST 2015)
Android background services support, FireDAC support for the MongoDB NoSQL database, FireMonkey controls zOrder support for Windows, new TBeaconDevice class for turning a device on any of the supported platforms into a “beacon”, StyleViewer for Windows 10 Style in Bitmap Style Designer, high-DPI awareness and 4K monitor support, Windows 10 styles, support for Android services in the IDE, support for calling WinRT APIs
DELPHI 10.1 ‘BERLIN’ (APRIL 2016)
Android 6.0 support, Windows Desktop Bridge support, Address Book for iOS and Android, new ListView Item Designer, new CalendarView control, QuickEdits for VCL, high-DPI support on Windows, hint property changes, EMS Apache server support, GetIt-based web installer
DELPHI 10.2 ‘TOKYO’ (MARCH 2017)
64-bit Linux support for Delphi, FireDAC Linux support for all Linux-capable DBMSs, MariaDB (v5.5) support, MySQL v5.7 support and Firebird support for direct I/O, QuickEdits for FMX, new VCL controls for Windows 10, updated IDE look and feel (dark theme), RAD Server deployment license included
DELPHI 10.3 ‘RIO’ (NOVEMBER 2018)
C++17 for Win32, new Delphi language features, FireMonkey Android zOrder, native controls, and API level 26, Windows 10, VCL, and high-DPI improvements, extensive IDE UI modernization, RAD Server architecture extension, quality and performance improvements
DELPHI 10.3.1 ‘RIO’ (FEBRUARY 2019)
Expanded support for iOS 12 and iPhone X-series devices. RAD Server Console UI redesign and migration to the Ext JS framework (available via GetIt). Improved FireDAC support for Firebird 3.0.4 and embedded Firebird. HTTP and SOAP client library enhancements on Windows. Two new IDE productivity tools: Bookmarks and Navigator. 15 new custom VCL Windows and multi-device FireMonkey styles.
DELPHI 10.3.2 ‘RIO’ (JULY 2019)
64-bit Delphi macOS, C++17 for 64-bit Windows, C++ LSP Code Insight improvements, RAD Server wizards and deployment improvements, enhanced Firebase Android support, Delphi Linux client application support
DELPHI 10.3.3 ‘RIO’ (NOVEMBER 2019)
64-bit Delphi Android support, iOS 13 and macOS Catalina support (Delphi), RAD Server Docker deployment, Enterprise Connectors in the Enterprise & Architect editions
DELPHI 10.4 ‘SYDNEY’ (MAY 2020)
Significantly improved high-performance native Windows support, greater productivity with blazing-fast code completion, faster code with managed records and improved parallel tasks on modern multi-core CPUs, over 1,000 quality and performance improvements, and much more.
DELPHI 10.4.1 ‘SYDNEY’ (SEPTEMBER 2020)
RAD Studio 10.4.1 has a strong focus on quality improvements for the IDE, Delphi Code Insight (LSP), the Parallel Library, SOAP & XML, the C++ toolchain, FireMonkey, VCL, the Delphi compiler, and iOS deployment.
WPF .NET Framework
Released in 2006, WPF has developed alongside the .NET Framework. It was open-sourced by Microsoft in 2018 and has published several roadmaps indicating community engagement and growth in the near future. Significant changes in .NET and Microsoft's shifting design decisions affect WPF's long-term viability. According to Microsoft, .NET Framework 4.8 was the final version of the .NET Framework, released on April 18, 2019.
WPF was introduced with .NET Framework 3.0 in 2006. According to an article on the CodeProject website, WPF versions and enhancements are listed in this table:
WPF Version | Release (YYYY-MM) | .NET Version | Visual Studio Version | Major Features
3.0 | 2006-11 | 3.0 | N/A | Initial release. WPF development can also be done with VS 2005 (released in Nov 2005), with a few additions.
3.5 | 2007-11 | 3.5 | VS 2008 | Changes and improvements in: application model, data binding, controls, documents, annotations, and 3-D UI elements.
3.5 SP1 | 2008-08 | 3.5 SP1 | N/A | Native splash screen support, new WebBrowser control, DirectX pixel shader support. Faster startup time and improved performance for bitmap effects.
4.0 | 2010-04 | 4.0 | VS 2010 | New controls: Calendar, DataGrid, and DatePicker. Multi-touch and manipulation.
4.5 | 2012-08 | 4.5 | VS 2012 | New Ribbon control, new INotifyDataErrorInfo interface.
4.5.1 | 2013-10 | 4.5.1 | VS 2013 | No major change.
4.5.2 | 2014-05 | 4.5.2 | N/A | No major change.
4.6 | 2015-07 | 4.6 | VS 2015 | Transparent child window support, HDPI and touch improvements.
.NET Framework 4.6.1 – The release of .NET Framework 4.6.1 was announced on November 30, 2015. This version requires Windows 7 SP1 or later.
.NET Framework 4.6.2 – The preview of .NET Framework 4.6.2 was announced on March 30, 2016. It was released on August 2, 2016. This version requires Windows 7 SP1 or later.
.NET Framework 4.7 – On April 5, 2017, Microsoft announced that .NET Framework 4.7 was integrated into the Windows 10 Creators Update, promising a standalone installer for other Windows versions. An update to Visual Studio 2017 was released the same day to add support for targeting .NET Framework 4.7. The promised standalone installer for Windows 7 and later was released on May 2, 2017, but it had prerequisites not included in the package.
.NET Framework 4.7.1 – .NET Framework 4.7.1 was released on October 17, 2017. Among its fixes and new features, it corrects a d3dcompiler dependency issue and adds out-of-the-box compatibility with .NET Standard 2.0.
.NET Framework 4.7.2 – .NET Framework 4.7.2 was released on April 30, 2018. Its changes include improvements to ASP.NET, the BCL, the CLR, ClickOnce, networking, SQL, WCF, Windows Forms, Workflow, and WPF. This version ships with Server 2019.
.NET Framework 4.8 – .NET Framework 4.8 was released on April 18, 2019. It was the final version of the .NET Framework; all future work is directed at the .NET Core platform, which will eventually become .NET 5 and onward. This release included JIT improvements ported from .NET Core 2.1, high-DPI enhancements for WPF applications, accessibility improvements, performance updates, and security improvements. It supports Windows 7, Server 2008 R2, Server 2012, 8.1, Server 2012 R2, 10, Server 2016, and Server 2019, and it also ships as a Windows container image. The most recent build is 4.8.0 Build 3928, released on July 25, 2019, with an offline installer size of 111 MB and a digital signature date of July 25, 2019.
– Wikipedia
Electron
Released in 2013, Electron is actively developed and maintained by GitHub and has quickly provided support for emerging technologies such as Apple Silicon (circa November 2020). It lacks the history and stable longevity needed to determine whether Electron applications built in 2020 will survive to 2030. GitHub is a Microsoft subsidiary. Electron offers a free alternative to Delphi and WPF, familiarity for front-end developers, and cross-platform capability at the cost of IP protection, standard IDE tooling, and application performance.
According to the Electron release timeline (https://www.electronjs.org/docs/tutorial/electron-timelines), here are the releases.
Version | -beta.1 | Stable | Chrome | Node
2.0.0 | 2018-02-21 | 2018-05-01 | M61 | v8.9
3.0.0 | 2018-06-21 | 2018-09-18 | M66 | v10.2
4.0.0 | 2018-10-11 | 2018-12-20 | M69 | v10.11
5.0.0 | 2019-01-22 | 2019-04-24 | M73 | v12.0
6.0.0 | 2019-05-01 | 2019-07-30 | M76 | v12.4
7.0.0 | 2019-08-01 | 2019-10-22 | M78 | v12.8
8.0.0 | 2019-10-24 | 2020-02-04 | M80 | v12.13
9.0.0 | 2020-02-06 | 2020-05-19 | M83 | v12.14
10.0.0 | 2020-05-21 | 2020-08-25 | M85 | v12.16
11.0.0 | 2020-08-27 | 2020-11-17 | M87 | v12.18
12.0.0 | 2020-11-19 | 2021-03-02 | M89 | v14.x
Delphi offers the most secure long-term outlook, the best intellectual property protection, and the simplest in-house customization, at the cost of a one-time commercial license purchase. WPF's barrier to entry is lower and it offers better accessibility options, but it is subject to Microsoft's .NET revisions, is harder to customize, and can easily be decompiled. Electron is completely free and can be developed on any of the three major desktop platforms, but it pays for that flexibility with its uncertain long-term outlook and its reliance on corporate sponsorship and community support for further development.
Explore all the metrics in the whitepaper “Discovering the Best Developer Framework through Benchmarking”:
Qual é o desempenho do Delphi, do WPF .NET Framework e do Electron em comparação entre si, e qual é a melhor maneira de fazer uma comparação objetiva? A Embarcadero encomendou um white paper para investigar as diferenças entre Delphi, WPF .NET Framework e Electron para a construção de aplicativos de desktop do Windows. O aplicativo de referência – um clone da Calculadora do Windows 10 – foi recriado em cada estrutura por três voluntários Delphi Most Valuable Professionals (MVPs), um desenvolvedor WPF freelance especialista e um desenvolvedor freelance Electron especialista. Nesta postagem do blog, vamos explorar a métrica de Viabilidade de Longo Prazo, que faz parte da comparação de funcionalidade usada no white paper.
Viabilidade de Longo Prazo
Quando as empresas escolhem o Delphi como sua estrutura de desenvolvimento, elas estão investindo em uma estrutura proprietária (que inclui o código-fonte da biblioteca em tempo de execução) com custos iniciais e uma taxa de atualização anual opcional. Por esse custo, eles ganham uma estrutura estável, compatível com as versões anteriores e crescente, e podem ter certeza de que os aplicativos desenvolvidos hoje terão suporte e manutenção no futuro.
O Windows Presentation Foundation com .NET Framework oferece às empresas uma estrutura econômica com o apoio total da Microsoft, mas inclui todos os desafios que as escolhas da Microsoft induzem. O WPF tem uma história mais curta do que o Delphi, mas foi liberado em 2018, o que pode dar a alguma versão dele uma perspectiva brilhante de longo prazo, apesar de seus laços com o .NET Framework proprietário para a maioria do desenvolvimento do Windows. .NET Framework 4.8 foi o último lançamento em 18 de abril de 2019, de acordo com a Microsoft.
Electron é uma plataforma de código aberto gratuita que oferece às empresas a oportunidade de desenvolver aplicativos a partir de qualquer sistema operacional importante. O futuro da Electron é incerto, entretanto. O projeto Electron é executado pelo GitHub, que agora é uma subsidiária da Microsoft. É o mais novo dos três frameworks e ainda está em fase de lua de mel. Ele carece de um IDE nativo, dando às empresas uma escolha, mas também removendo algumas conveniências como compilação integrada e bibliotecas de teste incluídas. As empresas que desenvolvem ferramentas internas teriam mais dificuldade com o Electron do que com outras estruturas.
Vamos dar uma olhada em cada estrutura.
Delphi
O Delphi tem crescido, amadurecido e se expandido desde 1995. Seu desenvolvimento mantém a compatibilidade com versões anteriores ao grau que um aplicativo de 1995 pode ser portado para a versão atual do Delphi com mudanças mínimas. A documentação abrangente ajuda na manutenção, e uma equipe de suporte completa está disponível para atualização, migração ou ajuda na solução de problemas. No momento em que este livro foi escrito, a versão mais recente do Delphi estava disponível no RAD Studio 10.4.1 Sydney, que foi lançado em 2 de setembro de 2020. Quer saber mais? Verifique as notas de lançamento de muitas versões do Delphi .
Para algum contexto na linha do tempo da linguagem de programação, C ++ foi lançado em 1983, Python em 1991, Java em 1995, PHP em 1995, JavaScript em 1995 e Delphi em 1995. 1995 foi um ano de nascimento para muitas dessas linguagens de programação, como você pode ver. O site de aniversário da Delphi contém um cronograma de lançamento do Delphi de 1995 até o presente. Aqui está um trecho da linha do tempo de lançamentos nos últimos 25 anos.
DELPHI 1 – 14 DE FEVEREIRO DE 1995
Suporte a Windows 3.1 de 16 bits, ferramentas visuais bidirecionais, componentes / VCL, suporte a banco de dados via BDE e links SQL, dados de banco de dados ao vivo em tempo de design
DELPHI 2 (1996)
Suporte para Windows 95 de 32 bits, grade de banco de dados, automação OLE, herança de formato visual, strings longas, Delphi 1 incluído para 16 bits
DELPHI 3 (1997)
Interfaces (baseadas em COM), Code Insight, Modelos de Componente, Depuração de DLL, WebBroker, ActiveForms, Pacotes de Componente, arquitetura MIDAS multicamadas
DELPHI 4 (1998)
Ancoragem, âncoras e restrições, sobrecarga de método, matrizes dinâmicas, suporte para Windows 98
DELPHI 5 (1999)
Layouts de desktop, frames, suporte a XML, DBGo para ADO, traduções de linguagem
DELPHI 6 (2001)
Janela de estrutura, SOAP Web Services, dbExpress, BizSnap, WebSnap, DataSnap
DELPHI 7 (2002)
Desenvolvimento de aplicativos da Web, temas do Windows XP
DELPHI 8 (2003)
Suporte .NET
DELPHI 2005 (2004)
Namespaces de várias unidades, Error Insight, guia History, for..in, Function inlining, Theme-enabled IDE, Refactorings, Wild-card in usa declaração, Data Explorer, Integrated Unit Testing
DELPHI 2006 (2005)
Sobrecarga do operador, métodos e propriedades estáticos, Diretrizes do Designer, Visualização do posicionador de formulário, Modelos de código ativo, Completação de bloco, números de linha, Barras de alteração, edição sincronizada, Dobramento de código e navegação de método, Dicas de ferramentas de depuração, Paleta de ferramentas pesquisáveis, gerenciador de memória FastMM , Suporte para MySQL, suporte Unicode em dbExpress, TTrayIcon, TFlowPanel, TGridPanel
DELPHI 2007 (2006)
MS Build, Build Events, Build Configurations, suporte para Windows Vista – glassing, theming, dbExpress 4 – pool de conexão, drivers delegados, janelas de visualização de CPU, aprimoramentos de FastCode, suporte IntraWeb / AJAX, página de boas-vindas, Sim-ship of English, French, German , Japonês
DELPHI 2009 (2008)
Unicode, genéricos, métodos anônimos, controles de faixa de opções, DataSnap, configurações de compilação, explorador de classes, janela do editor de biblioteca de tipos, suporte a PNG
DELPHI 2010 (2009)
Atributos, RTTI aprimorado, tela Direct2D, suporte para Windows 7, toque / gestos, formatador de código-fonte, pontos de interrupção específicos de thread, visualizadores de depurador, unidade IOUtils para arquivos, caminhos e diretórios, auditorias e métricas de código-fonte, compilação de plano de fundo, código-fonte para MIDAS. DLL
DELPHI XE (2010)
Regular Expression Library, Subversion Integration, dbExpress –Filters, Authentication, ProxyGeneration, JavaScript Framework, suporte REST, Indy WebBroker, Cloud – Amazon EC2, Microsoft Azure, Build Groups, Named Threads no Debugger, Auditorias de linha de comando, Metrics and Documentation Generation
DELPHI XE2 (2011)
Windows de 64 bits, Mac OSX, FireMonkey, Live Bindings – FireMonkey e VCL, VCL Styles, Unit Scope Names, Platform Assistant, DataSnap – Connectors for Mobile Devices, Cloud API, suporte HTTPS, monitoramento TCP, suporte dbExpress para drivers ODBC, implantação Gerente
DELPHI XE3 (2012)
Metropolis UI para Windows 8, 7, Vista e XP, ações FM, toque / gestos, layouts e âncoras, suporte FM para estilos de bitmap, fonte de material TM para componentes FM 3D, áudio / vídeo FM, suporte VCL / FM para dispositivos sensores, FM Componente do sensor de localização, suporte para teclado virtual, suporte para DirectX 10
DELPHI XE4 (ABRIL DE 2013)
Suporte iOS – dispositivo, simulador, loja de aplicativos iOS, suporte iOS para telas padrão e retina, estilos iOS, estilos retina, teclados virtuais, designer de formulário móvel, componente TWebBrowser, iOS ARC (contagem automática de referência) para todas as classes TObject, Platform Services, Notificações, componentes do sensor de localização, movimento e orientação, componente TListView, suporte para tela cheia Mac OSX, gerenciador de implantação para dispositivos iOS, componentes de acesso universal a dados FireDAC, InterBase – IBLite e IBToGo
DELPHI XE5 (SETEMBRO DE 2013)
Suporte para Android – dispositivos e emulador. Versões do sistema operacional: Jelly Bean, Ice Cream Sandwich e Gingerbread, componente de notificação, suporte ao estilo iOS 7, designer de formulário configurável para dispositivos móveis, gerenciador de implantação para dispositivos Android, acesso de cliente de serviços REST e componentes de autenticação, suporte Android para todos os XE4 FM e recursos de banco de dados listados acima
DELPHI XE6 (ABRIL 2014)
Estilos Windows 7 e 8.1, Acesso à nuvem RESTful WEB Services, FireDAC compatível com mais bancos de dados, Suporte InterBase totalmente integrado
DELPHI XE7 (SETEMBRO 2014)
Aplicativos FireMonkey para vários dispositivos suportam plataformas de desktop e móveis, banco de dados incorporável IBLite para Windows, Mac, Android e iOS, suporte para vários monitores, suporte para multitoque e alterações de gestos, modo imersivo de tela inteira para Android, FireMonkey oferece suporte para Pull- to-Refresh Feature para TListView no iOS e Android, FireMonkey Save State Feature
DELPHI XE8 (ABRIL 2015)
GetIt Package Manager, melhorias FireDAC, nova barra de ferramentas da comunidade Embarcadero, apresentação nativa de TListView, TSwitch, TMemo, TCalendar, TMultiView e TEdit no iOS, mapas interativos, novas opções para biblioteca de mídia, InputQuery agora suporta mascaramento de campos de entrada
DELPHI 10 ‘SEATTLE’ (AGOSTO 2015)
Suporte a Android Background Services, suporte FireDAC para o banco de dados NoSQL MongoDB, FireMonkey controla o suporte zOrder para Windows, Nova classe TBeaconDevice para transformar um dispositivo em uma das plataformas suportadas em um “beacon”, StyleViewer para Windows 10 Style in Bitmap Style Designer, High – Consciência de DPI e suporte a monitores de 4K, estilos do Windows 10, suporte para serviços Android no IDE, suporte para chamada de APIs WinRT
DELPHI 10.1 ‘BERLIM’ (ABRIL DE 2016)
Suporte para Android 6.0, suporte para Windows Desktop Bridge, Catálogo de endereços para iOS e Android, Novo ListView Item Designer, Novo controle CalendarView, QuickEdits para VCL, Suporte para alto DPI no Windows, Hint Property Changes, EMS Apache Server Support, instalador da Web baseado em GetIt
DELPHI 10.2 ‘TÓQUIO’ (MARÇO DE 2017)
Suporte Linux de 64 bits para Delphi, FireDAC fornece suporte Linux para todos os DBMS habilitados para Linux, suporte MariaDB (v5.5), suporte MySQL para v5.7 e suporte Firebird para Direct I / O, QuickEdits para FMX, Novos controles VCL para Windows 10, aparência e comportamento IDE atualizados (tema escuro), licença de implantação de servidor RAD incluída
DELPHI 10.3 ‘RIO’ (NOVEMBRO 2018)
C ++ 17 para Win32, novos recursos de linguagem Delphi, FireMonkey Android zOrder, controles nativos e API de nível 26, Windows 10, VCL e melhorias de HighDPI, modernização extensiva de interface de usuário IDE, extensão de arquitetura de servidor RAD, melhorias de qualidade e desempenho
DELPHI 10.3.1 ‘RIO’ (FEVEREIRO 2019)
Suporte expandido para dispositivos da série iOS 12 e iPhone X. RAD Server Console UI reprojeto e migração para a estrutura Ext JS (disponível via GetIt). Suporte FireDAC aprimorado para Firebird 3.0.4 e Firebird embutido. Aprimoramentos da biblioteca de cliente HTTP e SOAP no Windows. Duas novas ferramentas de produtividade IDE: Bookmarks e Navigator. 15 novos estilos personalizados de VCL Windows e FireMonkey de vários dispositivos.
DELPHI 10.3.2 ‘RIO’ (JULHO DE 2019)
Delphi macOS 64-bit, C ++ 17 para Windows 64-bit, C ++ LSP Code Insight Improvements, RAD Server Wizards e Deployment Improvements, Enhanced Firebase Android Support, Delphi Linux Client Application Support
DELPHI 10.3.3 ‘RIO’ (NOVEMBRO 2019)
Suporte Delphi para Android de 64 bits, suporte para iOS 13 e macOS Catalina (Delphi), RAD Server Docker Deployment, Enterprise Connectors in Enterprise & Architect Edition
DELPHI 10.4 ‘SYDNEY’ (MAIO 2020)
Suporte nativo do Windows de alto desempenho significativamente aprimorado, maior produtividade com conclusão de código incrivelmente rápida, código mais rápido com registros gerenciados e tarefas paralelas aprimoradas em CPUs modernas de vários núcleos, mais de 1000 melhorias de qualidade e desempenho e muito mais.
DELPHI 10.4.1 ‘SYDNEY’ (SETEMBRO 2020)
RAD Studio 10.4.1 tem um forte foco em melhorias de qualidade para IDE, Delphi Code Insight (LSP), Biblioteca Paralela, SOAP e XML, C ++ Toolchain, FireMonkey, VCL, Delphi Compiler e iOS Deployment.
WPF .NET Framework
Lançado em 2006, o WPF foi desenvolvido junto com a estrutura .NET. O código-fonte foi aberto pela Microsoft em 2018 e forneceu vários roteiros indicando o envolvimento e o crescimento da comunidade em um futuro próximo. Mudanças significativas no .NET e as decisões de mudança de design da Microsoft afetam a viabilidade de longo prazo do WPF. O WPF .NET Framework 4.8 era a versão final do .NET Framework de acordo com a Microsoft e foi lançado em 18 de abril de 2019.
O WPF foi introduzido no .NET Framework 3.0 em 2006. De acordo com um artigo no site CodeProject, as versões e aprimoramentos do WPF estão listados nesta tabela:
WPF Version
Release (YYYY-MM)
.NET Version
Visual Studio Version
Major Features
3.0
2006-11
3.0
N/A
Initial Release. WPF development can be done with VS 2005 (released in Nov 2005) too with few additions.
3.5
2007-11
3.5
VS 2008
Changes and improvements in: Application model, data binding, controls, documents, annotations, and 3-D UI elements.
3.5 SP1
2008-08
3.5 SP1
N/A
Native splash screen support, New WebBrowser control, DirectX pixel shader support. Faster startup time and improved performance for Bitmap effects.
4.0
2010-04
4.0
VS 2010
New controls: Calendar, DataGrid, and DatePicker. Multi-Touch and Manipulation
4.5
2012-08
4.5
VS 2012
New Ribbon control New INotifyDataErrorInfo interface
4.5.1
2013-10
4.5.1
VS 2013
No Major Change
4.5.2
2014-05
4.5.2
N/A
No Major Change
4.6
2015-07
4.6
VS 2015
Transparent child window support HDPI and Touch improvements
.NET Framework 4.6.1 – The release of .NET Framework 4.6.1 was announced on November 30, 2015. This version requires Windows 7 SP1 or later, and added new features and APIs.
.NET Framework 4.6.2 – A preview of .NET Framework 4.6.2 was announced on March 30, 2016, and it was released on August 2, 2016. This version requires Windows 7 SP1 or later.
.NET Framework 4.7 – On April 5, 2017, Microsoft announced that .NET Framework 4.7 was integrated into the Windows 10 Creators Update, promising a standalone installer for other versions of Windows. A Visual Studio 2017 update released the same day added support for .NET Framework 4.7. The promised standalone installer for Windows 7 and later was released on May 2, 2017, but it had prerequisites not included in the package.
.NET Framework 4.7.1 – .NET Framework 4.7.1 was released on October 17, 2017. Among the fixes and new features, it addresses a d3dcompiler dependency issue and adds out-of-the-box compatibility with .NET Standard 2.0.
.NET Framework 4.7.2 – .NET Framework 4.7.2 was released on April 30, 2018. The changes include improvements to ASP.NET, the BCL, the CLR, ClickOnce, networking, SQL, WCF, Windows Forms, Workflow, and WPF. This version is included in Server 2019.
.NET Framework 4.8 – .NET Framework 4.8 was released on April 18, 2019. It was the final version of .NET Framework, with all future work going into the .NET Core platform that would eventually become .NET 5 and onward. This release included JIT enhancements backported from .NET Core 2.1, high-DPI enhancements for WPF applications, accessibility improvements, performance updates, and security enhancements. It supported Windows 7, Server 2008 R2, Server 2012, 8.1, Server 2012 R2, 10, Server 2016, and Server 2019, and also shipped as a Windows container image. The most recent release is 4.8.0 Build 3928, released on July 25, 2019, with an offline installer size of 111 MB and a digital signature date of July 25, 2019.
-WIKIPEDIA
Electron
Released in 2013, Electron is actively developed and maintained by GitHub and has quickly provided support for emerging technologies such as Apple Silicon (around November 2020). It lacks the track record and proven longevity needed to determine whether Electron applications developed in 2020 will survive to 2030. GitHub is a subsidiary of Microsoft. Electron offers a free alternative to Delphi and WPF, familiarity for front-end developers, and cross-platform capability, at the cost of IP protection, standard IDE tooling, and application performance.
According to the Electron release timeline (https://www.electronjs.org/docs/tutorial/electron-timelines), here are the releases:
Version | -beta.1 | Stable | Chrome | Node
2.0.0 | 2018-02-21 | 2018-05-01 | M61 | v8.9
3.0.0 | 2018-06-21 | 2018-09-18 | M66 | v10.2
4.0.0 | 2018-10-11 | 2018-12-20 | M69 | v10.11
5.0.0 | 2019-01-22 | 2019-04-24 | M73 | v12.0
6.0.0 | 2019-05-01 | 2019-07-30 | M76 | v12.4
7.0.0 | 2019-08-01 | 2019-10-22 | M78 | v12.8
8.0.0 | 2019-10-24 | 2020-02-04 | M80 | v12.13
9.0.0 | 2020-02-06 | 2020-05-19 | M83 | v12.14
10.0.0 | 2020-05-21 | 2020-08-25 | M85 | v12.16
11.0.0 | 2020-08-27 | 2020-11-17 | M87 | v12.18
12.0.0 | 2020-11-19 | 2021-03-02 | M89 | v14.x
Delphi offers the most assured long-term outlook, the best intellectual property security, and the easiest in-house customization, at the cost of a one-time commercial license purchase. WPF has a lower barrier to entry and offers better accessibility options, but it is subject to Microsoft's .NET overhauls, is harder to customize, and can be decompiled with ease. Electron is entirely free and can be developed on each of the three major desktop platforms, but it pays for that flexibility through its uncertain long-term outlook, relying on corporate sponsorship and community support for further development.
Explore all the metrics in the white paper “Discovering the Best Developer Framework Through Benchmarking”:
In my previous article, Tweaking DFM Loading, I explained ways to overcome the name problem with several instances of a data module. There is actually still another way to resolve this.
In case you haven't already, you should read that article first to understand what I am talking about here.
The FOnFindComponentInstance handler checks Screen.DataModules for a data module with the name in question. If we can avoid adding our data module to that list, the name of the newly created TMainDM instance will remain unchanged.
To skip adding a TDataModule instance to the Screen.DataModules list, we need to override its CreateNew method and change the Dummy parameter to -1.
As a consequence our global instance of TMainDM will not be found for TMainForm either. Thus we need to create a local instance of TMainDM in TMainForm, too.
constructor TMainForm.Create(AOwner: TComponent);
begin
  { Create the data module before inherited runs, so it already exists
    when the form's DFM is streamed in. The form owns the instance and
    frees it automatically. }
  TMainDM.Create(Self);
  inherited;
end;
Now we no longer need that global instance and can remove TMainDM from the auto-created list in the project.
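Put together, the CreateNew override described above might look like the following minimal sketch (assuming TMainDM is a plain TDataModule descendant; the -1 value for the Dummy parameter is what tells TDataModule.CreateNew to skip the Screen.DataModules registration):

```pascal
type
  TMainDM = class(TDataModule)
  public
    constructor CreateNew(AOwner: TComponent; Dummy: Integer = 0); override;
  end;

constructor TMainDM.CreateNew(AOwner: TComponent; Dummy: Integer);
begin
  { Forcing Dummy to -1 keeps this instance out of Screen.DataModules,
    so FOnFindComponentInstance will not find it and the instance keeps
    its given name. }
  inherited CreateNew(AOwner, -1);
end;
```

Note that the Dummy value passed in by callers is deliberately ignored; every instance of TMainDM is kept out of the global list.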
A framework's ability to support developer productivity is a measure of its ability to shorten an application's time to market, and of its influence on long-term labor costs. Developer productivity therefore directly affects a business's sustainability and profitability.
A central productivity metric is development time, that is, the total number of hours required to write a fully functional application from scratch. Development time is influenced by the usefulness of a framework's productivity tools, documentation, libraries, code completion, and other tooling that accelerates development. A related metric is how you deploy the software once it is built.
How does Delphi measure up against other frameworks used to build Windows desktop applications? Embarcadero commissioned a white paper to investigate the performance differences between Delphi, WPF .NET Framework, and Electron, using a simple app as a benchmark. Three volunteer Delphi Most Valuable Professionals (MVPs), an expert freelance WPF developer, and an expert freelance Electron developer recreated the benchmark application – a Windows 10 Calculator clone – in each framework.
The frameworks were evaluated against a set of metrics measuring performance in terms of developer productivity, business functionality, framework application flexibility, and end-product performance. In this blog post, we will examine “App Store Deployment,” one of the 23 metrics used in the benchmarking.
How Fast to the App Stores?
The “App Store Deployment” metric was intended to measure how each framework's IDE enables direct deployment to native platform application stores (e.g., the iOS App Store, Android's Google Play, the Microsoft Store). Frameworks with built-in deployment features reduce the complexity of product deployment, limit errors that can occur or compound, and shorten the time to market for initial products as well as for updates and bug fixes.
A good measure of product-development productivity is the time required to get the application to the user. Delphi earns top marks on this metric. The RAD Studio IDE automates the creation of packages for upload to the app stores for all major desktop and mobile targets, eliminating the headache of manual deployment and ensuring the process runs smoothly time after time. WPF and Electron struggle in this respect – WPF cannot be deployed directly to the Microsoft Store without conversion to a different framework, and Electron can only be deployed to the Microsoft Store with the help of third-party tools. Businesses should keep this “last mile” of product development and deployment in mind when selecting a framework for their application.
Let's take a closer look at each framework individually.
Delphi
Delphi's VCL framework can deploy directly to the Microsoft Store format. Delphi's FMX framework can deploy applications directly to the Microsoft Store format, the Apple App Store format, and the Google Play app store format for Android. In some cases this deployment produces a platform package, such as an APK or IPA, that must then be uploaded. Deployment to Android and iOS was not explicitly covered in the white paper, but Delphi offers those capabilities.
WPF applications cannot be deployed directly to any app store. Converting to the Universal Windows Platform (UWP) allows WPF .NET Framework apps to be deployed to the Microsoft Store, and converting to Xamarin provides access to the mobile app stores.
Electron applications can be packaged for the Microsoft Store, but they do not deploy there directly by default; third-party options complete the deployment process. Electron apps can also be packaged for the Apple App Store, but the process lacks automation assistance.
Electron apps could use the MSIX Packaging Tool but, broadly speaking, that is a third-party installation.
Explore all the metrics in the white paper “Discovering the Best Developer Framework Through Benchmarking”:
How do Delphi, WPF .NET Framework, and Electron compare to one another, and what is the best way to make an objective comparison? Embarcadero commissioned a white paper to investigate the differences between Delphi, WPF .NET Framework, and Electron for building Windows desktop applications. The benchmark application – a Windows 10 Calculator clone – was recreated in each framework by three volunteer Delphi Most Valuable Professionals (MVPs), one expert freelance WPF developer, and one expert freelance Electron developer. In this blog post, we will explore the development time metric, which is part of the productivity comparison used in the white paper.
Developers today have the luxury of choosing from a variety of available frameworks that allow development tasks to be implemented for different platforms. The abundance of available solutions for any given process can sometimes appear as an obstacle disguised as a benefit. It is this abundance that can lead to confusion about which framework is best for a given platform or project, and it calls for a system of rational comparison between frameworks, IDEs, and tools. Making accurate critical assessments of the benefits and drawbacks of common frameworks and IDEs is vital to transcend "shiny object syndrome" and find a long-term solution that delivers the expected functionality and performance.
How can a critical assessment be applied to a key developer tool such as an IDE?
Embarcadero approached this challenge by defining a benchmarking methodology built around the development of a calculator application for comparison between Delphi, Windows Presentation Foundation (WPF) with .NET Framework, and Electron. The results supported conclusions about each framework's productivity, functionality, flexibility, and performance. These conclusions were published in a white paper titled "Discovering the Best Developer Framework Through Benchmarking".
Benchmarking
As a benchmarking strategy for comparing the three frameworks, the participating developers built a clone of the Windows 10 "Standard" calculator. The intent was to test each framework's performance against a specific set of metrics and to enable side-by-side comparisons. The frameworks were evaluated based on a set of metrics measuring performance in terms of developer productivity, business functionality, framework application flexibility, and final product performance.
Developer Productivity
A framework's ability to support developer productivity is a measure of its ability to shorten the time it takes to bring an application to market, and of its influence on long-term labor costs. Developer productivity therefore directly affects the sustainability and profitability of a business. A core productivity metric is development time, that is, the total number of hours required to write a fully functional application from scratch. This metric is affected by the usefulness of a framework's productivity tools, documentation, libraries, code completion, and other tools that speed up development.
Let's take a closer look at each framework separately.
Delphi
Three expert Delphi developers completed the Calculator in an average of 4.66 hours using RAD Studio. One developer used his Delphi calculator code and a third-party library to create an Electron calculator in 7 minutes, demonstrating the reusability of Delphi code. The video shown below is a time-lapse of the build in Delphi.
WPF .NET Framework
An expert WPF developer completed the Calculator in 30 hours using Visual Studio. Sixteen other WPF estimates were received, ranging from 8 to 100 hours, with a mean of 53 hours and a mode of 80 hours. The video shown below is a time-lapse of the build in WPF.
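The mean and mode reported for these crowd-sourced estimates are ordinary descriptive statistics. A quick sketch with a purely hypothetical set of responses (the white paper reports only the range, mean, and mode, not the individual estimates):

```python
from statistics import mean, mode

# Hypothetical WPF estimates in hours, chosen only to illustrate the
# calculation; the actual individual responses are not published.
estimates = [8, 80, 40, 80, 100, 10]

print(mean(estimates))  # arithmetic mean: 53
print(mode(estimates))  # most frequent value: 80
```

A wide spread between the mean and the expert's actual build time, as seen here, is exactly why the white paper reports both measured times and community estimates.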
Electron
An expert Electron developer completed the Calculator in 10 hours, using Angular for the calculator logic and Electron for the GUI. Eight other Electron estimates were received, ranging from 15 to 80 hours, with a mean of 47 hours and a mode of 20 hours. The video shown below is a time-lapse of the build in Electron.
Scores
Explore all the metrics in the white paper "Discovering the Best Developer Framework Through Benchmarking":
As you may have seen online, Idera (Embarcadero's parent company) recently acquired apilayer, a company offering a number of REST API microservices, ranging from IP address geolocation to email and phone number verification, from financial information to weather and flight tracking. You can find a list of the company's products at https://apilayer.com/ . Each product has its own website, where you can generally sign up for an account with a limited number of free REST calls.
In this blog post I don't want to discuss the available services, but rather focus on how to call them from RAD Studio. I'll use just two services: the free and open REST Countries API and one of their premium geolocation services.
Using the REST Countries API
My starting point for exploring the apilayer services is a simple and completely free service, available along with full documentation at https://github.com/apilayer/restcountries . For an initial experiment, I used the REST Debugger to query the service, using the name endpoint and optionally passing a parameter, as you can see below (with the parameter value "united"):
Once the data looks right in the REST Debugger, you can simply use the Copy Components button to take a snapshot of the configuration of the REST Client Library components needed to build an application. Now create a Delphi or C++ application in RAD Studio, either VCL or FMX, and paste the components into a data module (or a form, if you are lazy). In this case, I used Delphi and VCL… and I was lazy.
I dropped a panel with an edit box, a button, and a DBGrid (plus a DataSource component), hooked them up, and executed the RESTRequest at design time to get a preview of the data:
The filtering code is very simple:
procedure TForm43.Button1Click(Sender: TObject);
begin
  if Edit1.Text = '' then
    RESTRequest1.Resource := 'name'
  else
    RESTRequest1.Resource := 'name/' + Edit1.Text;
  RESTRequest1.Execute;
end;
The only other change I had to make was to switch the TypesMode property of the RESTResponseDataSetAdapter1 component to JSONOnly, since the data analysis was mistakenly trying to convert some time-zone-related text into a date, resulting in an exception.
Reverse Geocoding with the Position Stack API
Next, I gave the free tier of a paid API, https://positionstack.com/ , a try. This service offers forward and reverse geocoding, mapping services, and more. Here, for example, I did a simple "forward" geocoding, providing an address and reading the latitude, longitude, and other local information in the REST Debugger. To get a tabular result, I had to configure the JSON root using the data element:
In this case you need to sign up for a developer key and enter it as an additional parameter of the request. (You may want to consider encoding this key rather than having it as a plain string in the final application.)
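As a sketch of what that request looks like on the wire (the v1/forward endpoint path and access_key parameter follow positionstack's public documentation; the address and key values here are placeholders, not working credentials):

```python
from urllib.parse import urlencode

BASE_URL = "http://api.positionstack.com/v1/forward"

def forward_geocode_url(address: str, access_key: str) -> str:
    # Build the forward-geocoding request URL; the REST Client Library
    # components assemble an equivalent request behind the scenes.
    return BASE_URL + "?" + urlencode({"access_key": access_key, "query": address})

url = forward_geocode_url("1600 Example Ave", "YOUR_ACCESS_KEY")
print(url)
```

The same key-as-query-parameter shape is what you enter in the REST Debugger's parameters pane before copying the components.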
Now, similarly to what I did above, we can copy the components into a RAD Studio application and have a ready-to-use FMX geocoding application, this time based on Live Bindings. This application, too, can display data at design time, in this case the latitude and longitude of my city (plus another city with the same name in the US):
Conclusion
Taking advantage of microservice APIs for application development, both desktop and mobile, can really speed up building and deploying innovative features while hosting them on a scalable infrastructure at the same time. I look forward to taking further advantage of the apilayer services, and I encourage you to have a look at what they can offer for your current or future projects.
In the previous articles in this series (Part 1, Part 2, and Part 3) we looked at problems with functions such as forEach(), map(), or every() when used with async functions or promises, and developed some alternative versions that suited our purposes. Let’s finish now by considering a pair of tightly related methods, find() and findIndex(), which also need a workaround.
Why do these methods fail? The reason is the same as for some() and every(), which we studied in Part 3 of this series. If we provide them an async function, a promise is returned every time, and promises being “truthy” objects, searches seem to always be successful!
Implementing find()
Let’s remember how array.find() works: basically, it loops through the whole array, looking for a value that satisfies a given function; when one is found, looping stops and that value is produced as the result, and if the loop gets to the end of the array without success, undefined is returned instead. And yes, this is a bit of a problem if you were looking for an undefined value yourself; how would you know whether the search succeeded or not? Using findIndex() would solve that, though.
As in all the previous articles in this series, we’ll always code functions in two ways: as a method to be added to the Array.prototype (even if doing this is usually frowned upon…) and as a common function — you may use any of those.
Our search will have to work with two values: a found boolean attribute that will be true if the search succeeded, and a value attribute that will store the value that satisfied the function. We’ll initialize found to false, and value to undefined; if the search succeeds, we’ll change found to true, and also update value.
Logic is a bit harder — we pass around an object with the found and value attributes, and at the end, we pick out just the latter attribute.
To test this, we could again use our sample array (with values 1, 2, 3, 5, and 8) and see if we can find some value whose square equals 9; we’ll also have the async call fail if provided with 2 as its argument, as in previous examples. The code could be as follows.
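The original listing isn't reproduced here, so what follows is a reconstruction that matches the description above: an object with found and value attributes is threaded through a promise-aware reduce(), the (possibly async) predicate is only called while nothing has been found, and a rejected call simply counts as a non-match. The fakeAPICall name and its failure on 2 follow the convention of the earlier parts of the series and are assumptions here:

```javascript
// Async-aware find(), as a method added to Array.prototype…
Array.prototype.findAsync = async function (fn) {
  const result = await this.reduce(
    async (accP, value) => {
      const acc = await accP; // the accumulator is itself a promise
      if (!acc.found) {
        try {
          if (await fn(value)) {
            return { found: true, value };
          }
        } catch (e) {
          // a rejected async call just counts as "no match"
        }
      }
      return acc; // once found, no further async calls are made
    },
    Promise.resolve({ found: false, value: undefined })
  );
  return result.value; // pick out just the value attribute
};

// …and as a common function.
const findAsync = (arr, fn) => Array.prototype.findAsync.call(arr, fn);

// Hypothetical async predicate, as in the previous parts of the series:
// it fails when given 2, and otherwise tests whether the square equals 9.
const fakeAPICall = async (v) => {
  if (v === 2) {
    throw new Error("fakeAPICall failed!");
  }
  return v * v === 9;
};

findAsync([1, 2, 3, 5, 8], fakeAPICall).then((v) => console.log(v)); // 3
```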
If we run this, we get something like the following for both versions of the code.
The failure was ignored, and only three async calls were made; after finding that 3 squared equals 9, we got our final result. On the other hand, if we change the test to look for a number whose square is 99, we get an undefined result, after having made async calls for all the values in the array.
It seems our logic is working fine — and fortunately, implementing the other search method, findIndex(), will now be quite easy.
Implementing findIndex()
After having implemented findAsync() as above, producing an async-aware version of findIndex() is almost trivial — instead of working with a value attribute, we’ll have an index attribute that will be the position in the array of the found value, or -1 if the search failed.
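Again, the original listing isn't shown here, so this is a sketch following the same reduce-based pattern, with an index attribute replacing value, in both the prototype-method and plain-function flavors:

```javascript
// Async-aware findIndex(): the same approach as findAsync(), but tracking
// the position of the match instead of the matched value.
Array.prototype.findIndexAsync = async function (fn) {
  const result = await this.reduce(
    async (accP, value, index) => {
      const acc = await accP;
      if (acc.index === -1) {
        try {
          if (await fn(value)) {
            return { index };
          }
        } catch (e) {
          // rejected calls count as "no match", as in findAsync()
        }
      }
      return acc;
    },
    Promise.resolve({ index: -1 }) // -1 will be the "not found" result
  );
  return result.index;
};

const findIndexAsync = (arr, fn) =>
  Array.prototype.findIndexAsync.call(arr, fn);

findIndexAsync([1, 2, 3, 5, 8], async (v) => v * v === 9).then((i) =>
  console.log(i)
); // 2
```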
We can test this in the same way we tested findAsync(); all you have to do is a quick “search and replace”, so we don’t really need to show the test code. The same successful search shown above would now produce something like the following.
The result is 2, corresponding to the position of value 3 in the array. We can also verify that an unsuccessful search (as shown earlier) will return -1.
So, it seems we got both versions of our searching logic to work — with little extra work!
Summary
In this last article, we finished our study of dealing with async loops and promises in JavaScript by developing our own versions of promise-aware code for find() and findIndex().
Even if some of the examples we saw are probably not too likely in practice, we were able to experiment with several of the latest features of Node.js, and we also got some practice with promises, especially when dealing with failures.
A final comment: do not take all the code we saw as definitive; there certainly are many more ways of doing what I did, and you could get clearer or more performant versions of our functions by going some other way. Feel free to experiment, and do comment on your results!
References
This article is also partially based on Chapter 6, “Programming Declaratively — A Better Style” of my “Mastering JavaScript Functional Programming” book, for Packt; searching functions weren’t covered there.
With Change Views, you have a patented method of working that is much easier to develop and scale for your customer base. Using InterBase Change Views, you can now identify specifically which deltas have changed, down to the field level.
Change Views are a subscription-based model used to subscribe to some data and identify which data has changed in your InterBase database. You create a subscription that covers different tables and columns, and grant users the right to subscribe to the data changes.
During a connection, you can get an alert and then fetch your delta changes, or use a TFDEventAlerter component in Delphi/C++Builder and even color-code specific changes. Going beyond a single connection, you don't need to be connected for the changes to be recorded. Once you start a database transaction, you can have an active subscription, disconnect from the database, and start a new connection with a new transaction at a future time.
With Change Views, you:
Reduce costs and disk I/O by minimizing data synchronizations
See little impact on performance
Need no external log tables
Scale to many users, even mobile ones
Track changes any way you want
Implementing Change Views with FireDAC
Check out this short four-minute video on using Change Views and how you can start tracking data changes.
If you'd like to try out Change Views, you can take a look at the Generic Change Views sample application demonstrated above, which ships with Delphi.
Does Delphi make you a better programmer? Is Object Pascal code more readable?
There is an algorithm with a mystery constant that rose to fame in John Carmack's Quake III Arena C code for quickly estimating the inverse square root of a 32-bit floating-point number.
float Q_rsqrt( float number )
{
long i;
float x2, y;
const float threehalfs = 1.5F;
x2 = number * 0.5F;
y = number;
i = * ( long * ) &y; // evil floating point bit level hacking
i = 0x5f3759df - ( i >> 1 ); // what the f**k?
y = * ( float * ) &i;
y = y * ( threehalfs - ( x2 * y * y ) ); // 1st iteration
// y = y * ( threehalfs - ( x2 * y * y ) ); // 2nd iteration, this can be removed
return y;
}
Understanding the Code
It is based on Newton's method for estimating roots. Additionally, it converts the floating-point number to an integer, uses bit shifting, and starts with an approximation of √2^127. The commented-out line allows an additional iteration to improve the estimate, which was not used in Quake III Arena. You can read more about it on Wikipedia or watch some YouTube videos on the subject [including a very deep dive]. Here is a nice high-level video:
Can Delphi Do It Better?
Facebook user Toon Krijthe showed how much clearer and simpler the code would be if implemented in Object Pascal / Delphi.
function rsqrt(const ANumber: Single): Single;
var
ResultAsInt: UInt32 absolute Result;
begin
Result := ANumber;
ResultAsInt := $5F3759DF - (ResultAsInt shr 1);
Result := Result * ( 1.5 - (ANumber * 0.5 * Result * Result)); // 1st iteration
// Result := Result * ( 1.5 - (ANumber * 0.5 * Result * Result)); // 2nd iteration, this can be removed
end;
It makes use of the absolute keyword to map the floating-point number onto the integer, which avoids all the “evil floating point bit level hacking”. This is something I love about Delphi and Object Pascal: it gives you access to pointers, raw memory, and so on, but doesn't force you to use them when you don't want or need to. Shorter code is not always easier to understand (just take a look at any regular expression), but this is an improvement because it removes so much extraneous code. Much more readable.
“Any fool can write code that a computer can understand. Good programmers write code that humans can understand.”
Martin Fowler
Software Engineer, Author of “Refactoring”
Object Pascal is so readable that it makes programmers better, since it makes their code more readable and maintainable. Don't get me wrong, you can write spaghetti in any programming language/syntax, but starting with a readable one helps. This is why there are so many “legacy” Delphi programs: they are successful and maintainable. Code that doesn't work or can't be maintained gets thrown away or rewritten.
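For the curious, the same bit trick can be reproduced outside C or Pascal. Here is an illustrative JavaScript sketch (not from the original article) that uses a Float32Array and a Uint32Array sharing one buffer in place of the pointer casts, together with a quick check of the accuracy claim for a single Newton iteration:

```javascript
// Fast inverse square root in JavaScript: two typed-array views over the
// same 4-byte buffer reinterpret the float's bits as an unsigned integer.
const buffer = new ArrayBuffer(4);
const asFloat = new Float32Array(buffer);
const asInt = new Uint32Array(buffer);

function qRsqrt(number) {
  const x2 = number * 0.5;
  asFloat[0] = number;
  asInt[0] = 0x5f3759df - (asInt[0] >>> 1); // the mystery constant at work
  let y = asFloat[0];
  y = y * (1.5 - x2 * y * y); // 1st iteration of Newton's method
  return y;
}

// The approximation stays within a fraction of a percent of 1/sqrt(x):
for (const x of [0.01, 1, 4, 100, 12345]) {
  const error = Math.abs(qRsqrt(x) * Math.sqrt(x) - 1);
  console.log(x, qRsqrt(x), error < 0.005);
}
```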
Need more high-performance math?
Erik van Bilsen, Embarcadero MVP, Co-Founder of Grijjy, Inc., Author of neslib/FastMath
If you're looking for faster math routines for Delphi, check out the high-performance FastMath library by Embarcadero MVP Erik van Bilsen of Grijjy.
FastMath – Fast Math Library for Delphi
FastMath is a Delphi math library optimized for speed (sometimes at the cost of not performing error checking or losing a little precision).
This makes FastMath ideal for high-performance, math-intensive applications such as multimedia applications and games. For even better performance, the library provides a variety of “approximate” functions (they all start with the Fast prefix). These can be very fast, but you will lose some (sometimes surprisingly little) precision. For games and animation, this loss of precision is usually perfectly acceptable and is outweighed by the gain in speed. Just don't use them for scientific computations…
GetIt Package Manager is a package manager built into the RAD Studio IDE that lets you browse, download, purchase, and install packages. Packages can contain libraries, components, IDE extensions, and SDKs. The packages available in the package manager can be browsed on the Embarcadero GetIt site and downloaded in the IDE or via the command line. In addition, the latest list of new packages added to the GetIt Package Manager is available through an RSS feed.
The goal of GetIt is to help customers find valuable libraries and easily install them in the IDE, and also to simplify migration from one version of RAD Studio to another by keeping those libraries available after a release, so that an existing project can be easily migrated after a quick download of the required libraries.
You can install packages from GetIt via the command line using GetItCmd:
GetIt Package Manager - Version 7.0
Copyright (c) 2019 Embarcadero Technologies, Inc. All Rights Reserved.
Usage: GetItCmd []:
-install or -i
Install Item[s] separated with ';'
-uninstall or -u
Uninstall Item[s] separated with ';'
-user_name:
User name for proxies with required authentication.
-password:
Password for proxies with required authentication.
-accept_eulas
The user accepts EULA[s] of downloaded package[s].
-verb:[Quiet/Minimal/Normal/Detailed]
Specifies the verbose level for console output messages.
-listavailable:[Filter by substring]
List all avilable packages from package source.
Options:
-filter:[All/Free/Acquired/Installed]. Default[Installed].
-sort:[Name/Vendor]. Default[Name].
-r Custom registry subkey for saving.
A number of new or updated libraries were added to Embarcadero GetIt in December 2020. Take a look!
TChrome Tabs is a comprehensive implementation of Google Chrome's tab system. Features include fully animated tabs, drag and drop, automatic resizing and positioning, right-to-left text, custom tab shapes, a full demo, and much more.
December 29, 2020. Mozilla Public License 1.1 (MPL 1.1)
ICS is a Delphi library made up of many internet components supporting all major protocols and applications. All components are event-driven and non-blocking, with blocking versions for simpler applications. Includes OpenSSL 1.1.1i.
December 23, 2020. Copyrighted freeware
SynEdit for Delphi and C++Builder. A syntax-highlighting edit control, not based on the Windows common controls. Supported platforms: Windows.
EWriter eBooks are the modern alternative to the obsolete CHM format for local application help. They offer full support for context-sensitive help and file links. They combine the benefits of CHM and WebHelp and eliminate the disadvantages of both. The package includes the unit Vcl.EwriterHelpViewer.pas, which implements support for the eWriter help format in Delphi's help system. Applications currently using CHM files for application help can switch to eWriter help almost without changes.
December 21, 2020. GNU General Public License (GPL)
A cryptography library containing hash algorithms, a cryptographic pseudo-random number generator, and CRC and format-conversion classes, along with demo projects and extensive documentation.
Modern apps have multiple threads, and this plugin lets you debug in parallel: the same way your code runs! View multiple call stacks, step or run each thread instead of the whole process, watch multiple threads execute right in the code editor, and more.
A RAD Studio expert for creating NSIS and Inno Setup installers from the IDE. It integrates NSIS (Nullsoft Scriptable Install System) and Inno Setup with the IDE and lets you create and build NSIS and Inno Setup projects (installers) right inside RAD Studio, gaining all the benefits of a shared integrated environment!
The FastReport FMX report generator is a modern solution for integrating Business Intelligence into your software. It was created for developers who want to use ready-made reporting components. With its simplicity of use, convenience, and small distribution size, FastReport FMX delivers high functionality and performance on almost any modern PC.
AQtime is an integrated profiler toolkit that helps you find performance bottlenecks as well as memory and resource leaks in your applications and eliminate them easily. With AQtime Standard for Embarcadero RAD Studio, you can profile 32-bit native-code applications built with Embarcadero RAD Studio XE6 – XE8, 10, 10.1.
RVMedia is a set of components for displaying video from various sources, controlling IP cameras, organizing video chats, and recording audio and video files (Windows platform).
Abbrevia is a compression toolkit for Delphi, C++Builder, Kylix, and Free Pascal. Supported platforms: Windows, Android, OS X, iOS.
December 1, 2020. Mozilla Public License
When FireMonkey introduced FireUI Preview a few releases ago, the application was made available in the Google and Apple stores to simplify deployment to a device. The same app is also a demo available as source code, but that requires a few configuration steps before you can compile it and deploy it to a device. Having a pre-built, ready-to-use version is a nice advantage.
Once it is installed on your device, you need to enable the View | Broadcast to Devices feature in the RAD Studio IDE, open a FireMonkey form, and connect from the app to the IDE on the listed PC (note that this requires the PC or VM running the IDE and the device to be on the same internal network):
Now, if you open a form in the designer in the IDE, such as the following:
You will immediately see it on the device in the FireUI App Preview, with the proper style, and any change you make to the form at design time is immediately reflected in the device preview:
Of course, if you open a project with a more complete UI, such as the Accelerometer demo, you will see it displayed on your device before compiling and deploying it:
This helps significantly cut the time it takes to design an app's UI and see it on an actual device, substantially reducing development time. In this area, FireMonkey offers a much better experience than most other multi-device development tools out there!
Happy New Year! It’s time to review the big trends in JavaScript and technology in 2020 and consider our momentum going into 2021.
Our aim is to highlight the learning topics and technologies with the highest potential job ROI. This is not about which ones are best, but which ones have the most potential to land you (or keep you in) a great job in 2021. We’ll also look at some larger tech trends towards the end.
Language Rankings
JavaScript still reigns supreme on GitHub and Stack Overflow. Tip #1: Learn JavaScript, and in particular, learn functional programming in JavaScript. Most of JavaScript’s top frameworks, including React, Redux, Lodash, and Ramda, are grounded in functional programming concepts.
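The functional style those frameworks encourage can be sketched in a few lines. The pipe helper and the reducer below are illustrative, not taken from any particular library (Lodash and Ramda ship their own equivalents of pipe).

```typescript
// Function composition: pipe() threads a value through a list of pure functions.
const pipe = <T>(...fns: Array<(x: T) => T>) => (x: T): T =>
  fns.reduce((acc, fn) => fn(acc), x);

const shout = pipe<string>(
  (s) => s.trim(),
  (s) => s.toUpperCase(),
);
console.log(shout("  hello ")); // "HELLO"

// A Redux-style reducer is just a pure function of (state, action) -> state.
type State = { count: number };
type Action = { type: "increment" } | { type: "reset" };

const reducer = (state: State, action: Action): State => {
  switch (action.type) {
    case "increment": return { count: state.count + 1 };
    case "reset":     return { count: 0 };
  }
};

// Replaying a list of actions is a fold over the initial state.
const actions: Action[] = [{ type: "increment" }, { type: "increment" }];
const final = actions.reduce(reducer, { count: 0 });
console.log(final.count); // 2
```

No state is mutated anywhere: every step returns a new value, which is exactly what makes Redux reducers easy to test and replay.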
TypeScript jumped past PHP and C# into 4th place, behind only Java, Python, and JavaScript. Python climbed past Java into 2nd place, perhaps on the strength of rapidly climbing interest in AI and the PyTorch library for GPU-accelerated, dynamic deep neural networks, which makes experimentation with network structures easier and faster.
Source: GitHub State of the Octoverse, 2020
JavaScript is also #1 on Stack Overflow for the 8th year in a row. Python, Java, C#, PHP, and TypeScript beat out languages like C++, C, Go, Kotlin, and Ruby.
Frameworks
When it comes to front-end frameworks, a large majority of JavaScript developers use React, Vue.js, or Angular. jQuery still makes a surprisingly large showing, almost double that of Vue.js, but my guess is that jQuery is used less in application work and more in content sites and WordPress templates, so we're excluding it this year.
Search Volume
React dominates search volume at 57.5%, with Angular collecting a large 31.5% share, and Vue.js picking up a respectable 11% slice.
*Methodology: All search trends were selected by topic rather than by keyword to exclude false positives.
Jobs
If you want to learn the framework that will give you the best odds of landing a job in 2021, your best bet is still React, as it has been since 2017. React is mentioned in 47.6% of the listings that mention a common front-end framework; Angular picks up 41.2%, and Vue.js trails at 11.2%.
It’s important to mention that most job listings say that they require experience with one of a few named frameworks, but a large share of those listings are actually hiring for React work when you look at their listed tech stack, and will show preference to candidates with a strong knowledge of React. You’ll see some supporting evidence of that in the download trends, below.
*Methodology: Job searches were conducted on Indeed.com. To weed out false positives, I paired searches with the keyword “software” to strengthen the chance of relevance. I also omitted the “.js” from “Vue.js” because many listings don’t include the “.js”. All SERPS were sorted by date and spot checked for relevance.
Downloads
The npm download counts look fairly similar to the search trends, but reveal something interesting: The number of downloads for Angular 2+ and Vue.js are pretty much neck-and-neck, but if you add in the number of people using the old Angular framework, Angular has a solid lead over Vue.js in downloads.
Developer interest in TypeScript is undeniably strong, and growing rapidly. I predict that this trend will continue in 2021, and users will learn to work around some of the costs of using TypeScript (for example, by favoring interfaces over inline type annotations).
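As a sketch of that workaround, compare an inline object-type annotation with a named interface. The Rect shape below is just a made-up example:

```typescript
// Inline annotations repeat the shape at every use site:
function area1(rect: { width: number; height: number }): number {
  return rect.width * rect.height;
}

// Naming the shape once as an interface keeps signatures short and reusable:
interface Rect {
  width: number;
  height: number;
}

function area2(rect: Rect): number {
  return rect.width * rect.height;
}

function perimeter(rect: Rect): number {
  return 2 * (rect.width + rect.height);
}

console.log(area2({ width: 3, height: 4 })); // 12
```

Beyond brevity, the named interface gives you a single place to evolve the type: add a field to Rect and every function that consumes it is re-checked by the compiler.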
The number of jobs that specifically mention TypeScript is still relatively small, but some experience with TypeScript will slightly increase your odds of landing a job in 2021. By 2022, some experience with TypeScript might give you an edge in the job market. However, because it’s easier for a JavaScript developer to learn TypeScript than a completely new language, TypeScript teams are usually willing to hire and train good JavaScript developers.
Server Frameworks
On the server side, Express still dominates in download counts, so much so that it’s difficult to see how popular contenders are doing relative to each other.
As I predicted last year, if we exclude Express, we see that Next.js has emerged as the top contender. That's unsurprising, because Next.js is a flexible, full-stack, React-based framework that can help you deliver statically optimized content, but can also fall back on serverless functions for API routes and SSR when you need to generate content dynamically. You can even statically generate content on demand the first time it's requested, and subsequently serve the cached static content from a CDN, which is useful for apps built on user-generated content.
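The generate-on-first-request behavior described above can be sketched without any framework at all. This is an illustrative toy of the underlying idea, not Next.js's actual implementation; `makeStaticCache` and `renderPage` are invented names:

```typescript
// Minimal sketch of generate-on-first-request caching: render a page
// once when it is first asked for, then serve the cached copy.
type Renderer = (slug: string) => string;

function makeStaticCache(renderPage: Renderer) {
  const cache = new Map<string, string>();
  let renders = 0;

  return {
    get(slug: string): string {
      if (!cache.has(slug)) {
        cache.set(slug, renderPage(slug)); // generated on first request
        renders++;
      }
      return cache.get(slug)!; // served from cache afterwards
    },
    rendersSoFar: () => renders,
  };
}

const pages = makeStaticCache((slug) => `<h1>${slug}</h1>`);
pages.get("hello"); // rendered now
pages.get("hello"); // cached copy
console.log(pages.rendersSoFar()); // 1
```

A real framework adds the missing pieces — persistence across server instances, cache invalidation/revalidation, and CDN distribution — but the core trade is the same: pay the rendering cost once, then serve static content.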
Next has many other advantages, including automatic optimization of page bundles, automatic image optimization with the new Image tag, and built-in performance analytics to help you improve your users' page-load experience.
If you use GitHub and deploy on Vercel, you’ll also get automatic deploys for every PR, and a buttery smooth CI/CD pipeline. Essentially, it’s like having the best full-time DevOps team on staff, but instead of paying them salaries, you save a significant amount of money in hosting bills.
Expect Next.js to continue to explode in 2021.
Remote Work Trends
In 2020, a global pandemic forced teams to learn to collaborate remotely. In 2021, remote work will continue to be an important topic. First, because it will probably be June before vaccination against COVID-19 is widespread, and second, because a lot of teams experienced increased productivity and reduced costs during lockdown, many employees will not return to offices in 2021.
Remote work has also led to more location freedom, prompting developers to move to places where they have access to things that are important to them, such as family and more affordable housing. Additionally, 72% of employers surveyed by KPMG said that remote work has widened their potential talent pool.
Remote-first and hybrid-remote teams will be the new normal in the new decade.
Average JavaScript Developer salaries dipped slightly in 2020, from $114k/year to $113k/year, according to Indeed, perhaps due in part to remote work expanding the employee pool beyond tech centers like San Francisco and New York, which tend to have a much higher cost of living, and demand higher salaries to compensate. The average JavaScript Developer salary in San Francisco is $130k.
Still, lots of companies with roots in San Francisco and other tech centers are paying remote workers somewhere between the US national average and San Francisco pay, which provides a premium on market rates to attract better talent, and still saves money over hiring locally and paying for office space.
Because of this trend, lots of remote jobs exist in the $115k–$130k range for mid-level developers. Senior developers often find jobs in the $120k–$150k range, regardless of location.
GitHub data suggests that rather than slowing down, teams were more productive working remotely in 2020. GitHub activity spiked when lockdowns began.
Source: GitHub State of the Octoverse, 2020
Volume of work on GitHub increased substantially, and average pull request merge times dropped by 7.5 hours.
Toss that onto the growing pile of evidence that remote work works.
Passwords are Obsolete
Passwords are obsolete, insecure technology and absolutely should not be used to protect your users or your app in 2021.
The crux of the matter is that about half of all users reuse passwords on multiple applications and websites, and attackers are financially incentivized to bring massive computing power to the problem of cracking your user’s passwords so they can try them on bank accounts, Amazon, etc.
If you’re not Google, Microsoft, or Amazon, chances are you can’t afford the computing power required to defend against modern password crackers. Don’t believe me? Check out HaveIBeenPwned. Spoiler: If you’ve used the internet, your passwords have been stolen.
I’ve been warning about the dangers of passwords for years, but in 2020, new options emerged that allow us to leave passwords behind, permanently. It was true in 2020, and it remains true: no new app should use passwords in 2021.
But once you leave passwords behind in exchange for cryptographic key pairs, your app also gains Web3 superpowers. Which leads me to the next topic: Crypto.
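As a rough illustration of the key-pair alternative, here is a minimal challenge-response sketch using Node's built-in crypto module. The flow is deliberately simplified and is not a production authentication protocol:

```typescript
// Minimal challenge-response sketch: the server stores only a public key,
// so there is no password (or password hash) for attackers to crack.
import { generateKeyPairSync, randomBytes, sign, verify } from "node:crypto";

// Registration: the user's device generates a key pair and registers the
// public key with the server. The private key never leaves the device.
const { publicKey, privateKey } = generateKeyPairSync("ed25519");

// Login: the server issues a random, single-use challenge...
const challenge = randomBytes(32);

// ...the device signs it with the private key...
const signature = sign(null, challenge, privateKey);

// ...and the server verifies the signature against the stored public key.
const ok = verify(null, challenge, publicKey, signature);
console.log(ok); // true
```

Even if the server's database leaks, an attacker gets only public keys, which cannot be replayed on other sites the way reused passwords can.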
Crypto
Crypto will continue to be one of the most important and globally transformational technologies in 2021. Here are some highlights from 2020:
Bitcoin exploded to new all-time highs, thanks in part to notable support from companies like PayPal. Expect more of the same in 2021.
Ethereum 2.0 beacon chain launched, which lays the groundwork for Ethereum to become a much more scalable platform. Additionally, scalability solutions such as side-chains and zkRollups gained momentum in 2020. Expect to see more DApps (Decentralized Apps) integrate those scaling solutions in 2021.
DeFi (Decentralized Finance) is now a $15 billion market (up from $650 million when I wrote last year’s edition of this post), mostly operating on the Ethereum blockchain. Many multi-million-dollar exploits plagued the DeFi ecosystem in 2020. Smart contract security will continue to be a hot topic and huge opportunity in 2021.
Non-Fungible Tokens (NFTs) gained momentum in 2020, with several high-profile sales of single tokens priced in the tens of thousands of dollars each. Rarible introduced their own community token and began to airdrop it to marketplace users, fueling increased volume. Millions of dollars' worth of NFTs are bought and sold daily, but this is just the beginning. Because they can represent virtually anything of value, the total addressable market is in the trillions of dollars.
The Flow blockchain launched and brought with it lots of promise for mainstream blockchain adoption. NBA Top Shot has sold over $6 million in NBA-branded NFT moments, which represent short video clips of key moments in NBA games.
Theta Network launched smart contracts and NFTs. Among other things, NFTs will be used for stickers and badges on Theta.tv, a decentralized alternative to Twitch with millions of monthly active users.
Artificial Intelligence (AI)
2020 was a seminal year for AI. Via the GPT-3 launch, we learned that language models and transformers in general may be a viable path towards Artificial General Intelligence (AGI).
The human mind’s ability to generally solve a wide variety of problems by relating them to things we already know is known in AI circles as zero-shot and few-shot learning. We don’t need a lot of instruction or examples to take on tasks that are new to us. We can often figure out new kinds of problems with just a few (or no) examples (shots).
That general applicability of human cognitive skills is known as general intelligence. In AI, Artificial General Intelligence (AGI) is “the hypothetical intelligence of a machine that has the capacity to understand or learn any intellectual task that a human being can.”
GPT-3 demonstrated that it could teach itself math, how to code, how to translate text, and a virtually infinite variety of other skills via its gigantic training set which includes basically the whole public web (Common Crawl, WebText2, Books1, Books2, and Wikipedia), combined with its enormous model size. GPT-3 uses 175 billion parameters. For context, that’s an order of magnitude (10x) the previous state of the art, but still orders of magnitude smaller than the human brain.
Scaling up GPT-3 is likely to lead to even more breakthroughs in what it is capable of.
Self Driving Cars
In October 2020, Waymo began operating fully driverless (with no human in the driver's seat) on 100% of its rides. At the time of launch, there were 1,500 monthly active users and hundreds of cars serving the Phoenix metro area.
In December 2020, General Motors’ Cruise launched fully driverless rides on the streets of San Francisco.
Drone Delivery
UPS launched two drone trials in 2020: one to deliver prescriptions to a retirement community in Florida, and another to deliver medical supplies, including Personal Protective Equipment (PPE), between health care facilities in North Carolina.
Regulations, safety, noise, and technical challenges will likely continue to mean slow growth for drone delivery services in 2021, but with COVID restrictions likely continuing off and on through at least June, there has never been a better time to make quick progress on more efficient, contactless delivery.
Quantum Computing
Researchers in China have reported achieving a quantum computation 10 billion times faster than the one Google used to claim quantum supremacy last year. Researchers are making rapid progress, but quantum computing still requires extremely expensive hardware, and only a small handful of quantum computers in the world have achieved any kind of quantum supremacy.
Quantum-resistant cryptography, quantum-assisted cryptography, and quantum computing for machine learning are potential areas of focus where breakthroughs would have a significant industry-spanning, global impact. I believe that one day, the application of quantum computing in the field of AI will propel the technology forward many orders of magnitude — a feat that will have a profound impact on the human race.
In my opinion, that is unlikely to happen in the 2020s, but I expect to hear more quantum supremacy announcements in 2021, and perhaps breakthroughs in the variety of algorithms that state-of-the-art quantum computers can compute. We may also see more practical quantum-computing API services and use cases.
Learn React, Redux, Next.js, TDD and more on EricElliottJS.com. Access a treasure trove of video lessons and interactive code exercises for members.
1:1 Mentorship is hands down, the best way to learn software development. DevAnywhere.io provides lessons on functional programming, React, Redux, and more, guided by an experienced mentor using detailed curriculum designed by Eric Elliott.
Eric Elliott is a tech product and platform advisor, author of “Composing Software”, cofounder of EricElliottJS.com and DevAnywhere.io, and dev team mentor. He has contributed to software experiences for Adobe Systems, Zumba Fitness, The Wall Street Journal, ESPN, BBC, and top recording artists including Usher, Frank Ocean, Metallica, and many more.
He enjoys a remote lifestyle with the most beautiful woman in the world.
With Change Views, you have a patented method to work with that is much easier to develop and scale for your customer base. Using InterBase Change Views, you can now identify specifically which deltas have changed, down to the field level.
Change Views are a subscription-based model used to subscribe to some data and identify which data has changed in your InterBase database. You create a subscription covering different tables and columns, and grant users the rights to subscribe to changes in that data.
During a connection, you can receive an alert and then fetch your delta changes, or use the TFDEventAlerter component in Delphi/C++Builder and even color-code specific changes. Subscriptions also span connections: you don't need to stay connected for changes to be recorded. Once you have started a database transaction, you can have an active subscription, disconnect from your database, and then start a new connection with a new transaction at a future time.
With Change Views, you can:
Reduce costs and disk I/O by minimizing data synchronization
Keep the performance impact low
Avoid external log tables
Scale to many users, even mobile ones
Track changes the way you want
Implementing Change Views with FireDAC
Watch this short four-minute video on using Change Views and how you can start tracking data changes.
If you'd like to try Change Views, take a look at the Generic Change Views sample application I demonstrated above, which ships with Delphi.
The title of this blog post may combine two slightly over-familiar ideas, but it is highly appropriate for the goals we have set. Learning and teaching Delphi is paramount to our success, and we plan to keep doing it in the New Year. Delphi education helps popularize Embarcadero products, of course, but more importantly, it serves our community and your customers. Delphi is amazing, and new developers are missing out on it. As we approach the New Year, we are thinking about our most important efforts for 2021, which can be grouped under two broad headings: Product and Education.
Product
The number one area of focus, of course, is the product. Two important releases, probably three, are already in the works. 10.4.2 will deliver important quality improvements, completing the LSP upgrade effort and touching many other areas of productivity. The product team also has more creative ideas to open up the IDE and make it easier to build new features. We are also thinking about how to further improve onboarding and bring more low-code features to RAD Studio. 10.5 is shaping up to be a really nice release (see the 2020 roadmap), and we may be able to slip in a 10.5.1 release as well. RAD Studio is a huge product, and we realize that not every release touches every area that needs improvement, but the team is working tirelessly and with great enthusiasm to move the product forward. We are also deepening collaboration with our many technology partners to bring more advances to the Delphi community. Our shared goal is simple: to let you, the developers, build better products more easily.
Education
The second area of focus is Education. Prior to joining the Embarcadero team, my work centered on expanding educational opportunities internationally. While I was in a very different industry, the goal was the same: to use education as a launchpad for learners. This starts with content and builds into community. Much has already been achieved in this direction with Embarcadero Academy, Bootcamps, LearnDelphi, and other initiatives, but we want to do more. We want developers to be able to find Delphi education on every online training platform. We want more education systems to adopt Delphi in their computer curriculums. We want better self-learning tools, and also easier learning within RAD Studio.
We want to teach how Delphi relates to and can be used with other languages such as Python (over 4K people participated in our initial webinars). We also want stronger connections with the open-source community, which is key to innovation. We will continue to promote both commercial and non-commercial projects that help grow our ecosystem. There are a number of exciting new books for Delphi, including:
The updated version of Marco Cantù's Object Pascal Handbook for 10.4 Sydney. Learn about all these great new language features!
We have developed several educational bundles to help kick off the New Year. C++Builder users will get access to courses in advanced C++ and RAD Studio in English and German. Delphi developers will get access to two online courses in English, one on mobile development and one on RAD Server. Some of our partners are committed to offering these courses in local languages as well.
These educational bundles are available free to subscribers who purchase licenses before the end of the year, and are offered on top of all available discounts.
We also want to hear your views on how we should improve our education efforts. We run many surveys to find out what our audience wants, but I encourage you to reach out to me directly. If you would like to offer a class, have an idea of how to package one, or are planning to release a YouTube video we should know about, let us know.
2020 was a tough year for the world, but we are determined to welcome 2021 on a high note. That is why we are offering a huge 20 + 20 year-end special discount on all products. Click here to take advantage of this one-time 40% discount and hit the ground running in 2021!
Bonus offer: for those of you who have not yet renewed your support and maintenance, we are also offering discounts to close out the year. Contact our renewals team directly to see what you qualify for!
O título desta postagem do blog pode combinar duas ideias um tanto familiares demais, mas é altamente apropriado para os objetivos que estabelecemos. Aprender e ensinar Delphi é fundamental para nosso sucesso, e planejamos continuar fazendo isso no ano novo. A educação da Delphi ajuda a popularizar os produtos da Embarcadero, é claro, mas o mais importante, ela atende à nossa comunidade e aos seus clientes. Delphi é incrível, e novos desenvolvedores estão perdendo. À medida que nos aproximamos do Ano Novo, pensamos em nossos esforços mais importantes para 2021, que podem ser agrupados em dois grandes títulos: Produto e Educação.
produtos
A área de enfoque número um, é claro, é o produto. Dois lançamentos importantes, provavelmente três, já estão em andamento. 10.4.2 fornecerá melhorias de qualidade importantes, completando o esforço de atualização do LSP e abordando muitas outras áreas de produtividade. A equipe de produto também tem ideias mais criativas para abrir o IDE e tornar mais fácil a construção de novos recursos. Também estamos pensando em como melhorar ainda mais a integração e trazer mais recursos de baixo código para o RAD Studio. 10.5 está se preparando para ser um lançamento muito bom (veja o roteiro de 2020), e podemos introduzir uma versão 10.5.1 também. RAD Studio é um produto enorme e percebemos que nem todo lançamento afeta todas as áreas que precisam de melhorias, mas a equipe está trabalhando incansavelmente e com grande entusiasmo para fazer o produto avançar. Também estamos aprofundando a colaboração com nossos muitos parceiros de tecnologia para trazer mais avanços para a comunidade Delphi. Nosso objetivo comum é simples: permitir que vocês, desenvolvedores, criem produtos melhores com mais facilidade.
Educação
The second area of focus is Education. Prior to joining the Embarcadero team, my work was centered around expanding educational opportunities internationally. While I was in a much different industry the goal was the same, to use education as a launchpad for learners. This starts with content and builds into the community. Much has already been achieved in this direction with Embarcadero Academy, Bootcamps, LearnDelphi, and other initiatives, but we want to do more. We want to ensure developers can find Delphi education on every online training platform. We want more education systems to adopt Delphi for their computer curriculums. We want better self-learning tools, but also easier learning within RAD Studio.
Queremos ensinar como o Delphi se relaciona e pode ser usado com outras linguagens, como Python (tivemos mais de 4 K pessoas participando de nossos Webinars iniciais). Também queremos conexões mais fortes com a comunidade Open Source, que é a chave para a inovação. Continuaremos a promover projetos comerciais e não comerciais que ajudem a aumentar nosso ecossistema. Existem vários novos livros interessantes para a Delphi, incluindo:
Desenvolvemos vários pacotes educacionais para ajudar a iniciar o ano novo. Os usuários do C ++ Builder terão acesso a cursos em C ++ avançado e RAD Studio em inglês e alemão. Os desenvolvedores Delphi terão acesso a dois cursos online em inglês, um sobre desenvolvimento mobile e outro sobre RAD Server. Alguns de nossos parceiros estão empenhados em oferecer esses cursos também nos idiomas locais.
Esses pacotes educacionais estão disponíveis gratuitamente para assinantes que compram licenças antes do final do ano e são oferecidos além de todos os descontos disponíveis.
Também queremos ouvir sua opinião sobre como devemos melhorar nossos esforços de educação. Fazemos muitas pesquisas para descobrir o que nosso público deseja, mas encorajo você a entrar em contato comigo diretamente. Se você gostaria de oferecer uma aula ou tem uma idéia de como empacotar uma, ou está planejando lançar um vídeo do Youtube que devemos conhecer, nos avise.
2020 tem sido um ano difícil para o mundo, mas estamos determinados em dar as boas-vindas a 2021 com uma nota alta. É por isso que estamos oferecendo um grande desconto especial de final de ano de 20 + 20 em todos os produtos. Clique aqui para aproveitar este desconto único de 40% e comece a correr em 2021!
Oferta Bônus: Para aqueles que ainda não renovaram o suporte e a manutenção, também oferecemos descontos no fechamento do ano. Entre em contato com nossa equipe de renovações diretamente para ver o que você se qualifica!
El título de esta publicación de blog puede combinar dos ideas demasiado familiares, pero es muy apropiado para los objetivos que nos hemos fijado. Aprender y enseñar Delphi es primordial para nuestro éxito y planeamos seguir haciéndolo en el Año Nuevo. La educación de Delphi ayuda a popularizar los productos Embarcadero, por supuesto, pero lo más importante es que sirve a nuestra comunidad y a sus clientes. Delphi es increíble y los nuevos desarrolladores se lo están perdiendo. A medida que nos acercamos al Año Nuevo, pensamos en nuestros esfuerzos más importantes para 2021, que se pueden agrupar en dos grandes títulos: Producto y Educación.
Producto
El área de enfoque número uno, por supuesto, es Producto. Dos lanzamientos importantes, probablemente tres, ya están en proceso. 10.4.2 proporcionará importantes mejoras de calidad, completando el esfuerzo de actualización de LSP y abordando muchas otras áreas de productividad. El equipo de producto también tiene ideas más creativas para abrir el IDE y facilitar la creación de nuevas funciones. También estamos pensando en cómo mejorar aún más la incorporación y traer más funciones de código bajo a RAD Studio. 10.5 se perfila como una versión realmente agradable (consulte la hoja de ruta 2020), y es posible que también podamos introducir una versión 10.5.1. RAD Studio es un producto enorme y nos damos cuenta de que no todas las versiones afectan todas las áreas que necesitan mejoras, pero el equipo está trabajando incansablemente y con gran entusiasmo para hacer avanzar el producto. También estamos profundizando la colaboración con nuestros numerosos socios tecnológicos para aportar más avances a la comunidad de Delphi. Nuestro objetivo compartido es simple: permitirles a ustedes, los desarrolladores, crear mejores productos con mayor facilidad.
Educación
The second area of focus is Education. Prior to joining the Embarcadero team, my work was centered around expanding educational opportunities internationally. While I was in a much different industry the goal was the same, to use education as a launchpad for learners. This starts with content and builds into the community. Much has already been achieved in this direction with Embarcadero Academy, Bootcamps, LearnDelphi, and other initiatives, but we want to do more. We want to ensure developers can find Delphi education on every online training platform. We want more education systems to adopt Delphi for their computer curriculums. We want better self-learning tools, but also easier learning within RAD Studio.
Queremos enseñar cómo Delphi se relaciona y se puede usar con otros lenguajes como Python (tuvimos más de 4K personas que participaron en nuestros seminarios web iniciales). También queremos conexiones más sólidas con la comunidad de código abierto, que es clave para la innovación. Continuaremos promoviendo proyectos comerciales y no comerciales que ayuden a hacer crecer nuestro ecosistema. Hay una serie de libros nuevos e interesantes para Delphi, que incluyen:
Hemos desarrollado varios paquetes educativos para ayudar a dar inicio al Año Nuevo. Los usuarios de C ++ Builder tendrán acceso a cursos de C ++ avanzado y RAD Studio en inglés y alemán. Los desarrolladores de Delphi tendrán acceso a dos cursos en línea en inglés, uno sobre desarrollo móvil y otro sobre RAD Server. Algunos de nuestros socios se han comprometido a ofrecer estos cursos también en los idiomas locales.
Estos paquetes educativos están disponibles de forma gratuita para los suscriptores que adquieran licencias antes de fin de año y se ofrecen además de todos los descuentos disponibles.
También queremos escuchar sus opiniones sobre cómo deberíamos mejorar nuestros esfuerzos educativos. Hacemos muchas encuestas para descubrir lo que quiere nuestra audiencia, pero le animo a que se comunique conmigo directamente. Si desea ofrecer una clase o tiene una idea sobre cómo empaquetar una, o planea lanzar un video de Youtube que deberíamos conocer, háganoslo saber.
2020 ha sido un año difícil para el mundo, pero estamos decididos a dar la bienvenida a 2021 con una nota alta. Es por eso que ofrecemos un enorme descuento especial de fin de año de 20 + 20 en todos los productos. ¡Haga clic aquí para aprovechar este descuento único del 40% y empezar a trabajar en 2021!
Oferta de bonificación: para aquellos de ustedes que aún no hayan renovado su soporte y mantenimiento, también ofrecemos descuentos para cerrar el año. Póngase en contacto con nuestro equipo de renovaciones directamente para ver para qué califica.
Der Titel dieses Blog-Beitrags mag zwei etwas zu vertraute Begriffe vereinen, aber er ist sehr passend für die Ziele, die wir uns gesetzt haben. Das Lernen und Lehren von Delphi ist für unseren Erfolg von größter Bedeutung, und wir planen, im neuen Jahr mehr davon zu tun. Delphi-Schulungen helfen natürlich, Embarcadero-Produkte populär zu machen, aber noch wichtiger ist, dass sie unserer Community und Ihren Kunden dienen. Delphi ist großartig, und neue Entwickler verpassen es. Während wir uns dem neuen Jahr nähern, denken wir über unsere wichtigsten Bemühungen für 2021 nach, die unter zwei großen Überschriften gruppiert werden können: Produkt und Bildung.
Produkt
Der Schwerpunkt Nummer eins ist natürlich das Produkt. Zwei wichtige Versionen, wahrscheinlich drei, sind bereits in Arbeit. 10.4.2 wird wichtige Qualitätsverbesserungen bringen, das LSP-Upgrade abschließen und viele andere Produktivitätsbereiche ansprechen. Das Produktteam hat auch weitere kreative Ideen, um die IDE zu öffnen und die Erstellung neuer Funktionen zu erleichtern. Wir denken auch darüber nach, wie wir das Onboarding weiter verbessern und mehr Low-Code-Funktionen in RAD Studio einbringen können. 10.5 entwickelt sich zu einem wirklich schönen Release (siehe Roadmap 2020), und vielleicht können wir auch noch ein 10.5.1-Release einschieben. RAD Studio ist ein riesiges Produkt, und uns ist klar, dass nicht jedes Release alle Bereiche betrifft, die verbessert werden müssen, aber das Team arbeitet unermüdlich und mit erhöhtem Enthusiasmus daran, das Produkt voranzubringen. Wir vertiefen auch die Zusammenarbeit mit unseren vielen Technologiepartnern, um der Delphi-Community weitere Fortschritte zu ermöglichen. Unser gemeinsames Ziel ist einfach: Ihnen, den Entwicklern, die Möglichkeit zu geben, bessere Produkte einfacher zu erstellen.
Bildung
Der zweite Schwerpunktbereich ist die Ausbildung. Bevor ich dem Embarcadero-Team beitrat, konzentrierte sich meine Arbeit auf die Erweiterung von Bildungsmöglichkeiten auf internationaler Ebene. Obwohl ich in einer ganz anderen Branche tätig war, war das Ziel dasselbe: Bildung als Startrampe für Lernende zu nutzen. Das fängt bei den Inhalten an und baut sich in der Community auf. Mit der Embarcadero Academy, Bootcamps, LearnDelphi und anderen Initiativen wurde bereits viel in dieser Richtung erreicht, aber wir wollen noch mehr tun. Wir möchten sicherstellen, dass Entwickler Delphi-Schulungen auf jeder Online-Schulungsplattform finden können. Wir wollen, dass mehr Bildungssysteme Delphi in ihre Computer-Lehrpläne aufnehmen. Wir wollen bessere Selbstlernwerkzeuge, aber auch einfacheres Lernen innerhalb von RAD Studio.
We want to teach how Delphi relates to, and can be used with, other languages such as Python (over 4,000 people attended our first webinars). We also want a stronger connection with the open-source community, which is key to innovation. We will continue to support both commercial and non-commercial projects that contribute to the growth of our ecosystem. There are a number of exciting new books for Delphi, including:
The updated version of Marco Cantù's Object Pascal Handbook for 10.4 Sydney. Learn all about the great new language features!
We have put together several education bundles to kick off the new year. C++Builder users get access to courses on advanced C++ and RAD Studio in English and German. Delphi developers get access to two online courses in English, one on mobile development and one on RAD Server. Some of our partners are keen to offer these courses in local languages as well.
These training bundles are available free of charge to subscribers who purchase licenses before the end of the year, and they come on top of any available discounts.
We would also like to hear your views on how we should improve our education efforts. We run plenty of surveys to find out what our audience wants, but I encourage you to reach out to me directly. If you would like to offer a course, have an idea for packaging one, or are publishing a YouTube video we should know about, let us know.
2020 was a hard year for the world, but we are determined to welcome 2021 on a high note. That is why we are offering a huge 20 + 20 year-end discount on all products. Click here to take advantage of this one-time 40% discount and get a head start on 2021!
Bonus offer: for those of you who have not yet renewed your support and maintenance, we are offering year-end discounts as well. Contact our renewals team directly to find out what you qualify for!
Two popular free IDE add-ons for code navigation are now available for version 10.4.
In 10.3.1 we started shipping two popular IDE plugins focused on code navigation. Bookmarks replaces the IDE's editor bookmarks with an unlimited number of markers, new caret markers (a navigation breadcrumb), protection against accidental overwrites, a dockable window with contextual information about each bookmark, and more. Navigator adds a minimap to the editor (a scrollbar alternative that shows a preview of your code) and a Go To window that lets you jump quickly to any useful part of your unit from the keyboard, whether it is a method, a class declaration, a property, or even a property's implementing function.
You can learn more about Bookmarks and Navigator, including all of their productivity features.
These two plugins are now available for RAD Studio 10.4 in GetIt (see the IDE Plugins section on the left). Importantly, they have also been updated for the latest Delphi language features, including inline variables.
Just in time for Christmas, we are delivering on our promise and bringing you a new white paper – Developing the BEST Developer Framework through Benchmarking. This paper examines three frameworks – Delphi, Windows Presentation Foundation (WPF) with .NET Framework, and Electron – using a 23-metric weighted evaluation to determine which offers the best developer productivity, business functionality, application flexibility, and product performance. This first round chose a Windows 10 calculator clone as the benchmark, to examine each framework's ability to recreate a well-known GUI and target the Windows desktop environment.
Figure 1 – The scored, weighted evaluation scheme
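A scored, weighted evaluation of this kind boils down to a weighted average of per-metric scores. Here is a toy sketch in Delphi; the metric names, weights, and raw scores below are hypothetical stand-ins, not the paper's actual 23 metrics:

```pascal
program WeightedScore;
// Toy sketch of a scored, weighted evaluation like the one in Figure 1.
// Metrics, weights, and scores here are hypothetical examples.
{$APPTYPE CONSOLE}

uses
  System.SysUtils;

type
  TMetric = record
    Name: string;
    Weight: Double; // relative importance of this metric
    Score: Double;  // raw score on a 0..5 scale
  end;

var
  Metrics: array[0..2] of TMetric = (
    (Name: 'Developer productivity';  Weight: 3.0; Score: 5.0),
    (Name: 'Application flexibility'; Weight: 2.0; Score: 4.5),
    (Name: 'Product performance';     Weight: 1.0; Score: 4.0));
  M: TMetric;
  SumWeights, SumWeighted: Double;
begin
  SumWeights := 0.0;
  SumWeighted := 0.0;
  for M in Metrics do
  begin
    SumWeights := SumWeights + M.Weight;
    SumWeighted := SumWeighted + M.Weight * M.Score;
  end;
  // The overall result is the weighted average of the per-metric scores.
  Writeln(Format('Overall weighted score: %.2f / 5', [SumWeighted / SumWeights]));
end.
```

Weighting matters: a framework that excels on heavily weighted metrics can outscore one with a higher unweighted average.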
Our results probably won't surprise Delphi developers – Delphi VCL and FMX blew the competition out of the water, scoring 4.66 points out of 5. Electron was a distant second with 3.11 points, and WPF came in last. Scores aside, the qualitative and quantitative analysis yields several conclusions:
Delphi and its RAD Studio IDE profoundly improve development productivity and product time-to-market. Not only that, developing one code base to reach all desktop and mobile platforms simplifies successive releases and product maintenance.
WPF with .NET Framework offers small teams native entry into Windows applications and a solid IDE, but struggles to match Delphi's productivity, IP security, and performance, while also lacking the cross-platform features of Delphi and Electron.
Electron offers a free alternative to Delphi and WPF, familiarity for front-end developers, and cross-platform capability, at the cost of IP protection, standard IDE tooling, and application performance.
Figure 2 – Comparing Delphi, WPF, and Electron across the 4 categories
You can download this paper for free by visiting lp.embarcadero.com/Discovering_the_best_framework, entering your email address, and following the link sent to your inbox.
Community feedback
This paper is meant to start a conversation! The full source code for this project is available on GitHub for you to examine and improve. Read the paper, write a blog post in response or build a better calculator app, and submit a GitHub pull request so we can add it to the repository. Found a mistake? File an issue so we can improve this project and collect best practices and techniques for each framework. None of us is as smart as all of us!
Before RAD Studio 10.4, the default for the transaction isolation option on FireDAC connections was Read Committed; that is, TFDConnection.TxOptions.Isolation defaulted to xiReadCommitted. This was the value set on the component and, being the default, it was not sent to the database. FireDAC simply assumed that the default value in its configuration matched the database's default and did not explicitly send this isolation setting to the database at the start of a session. For example, on MySQL the required command, SET SESSION TRANSACTION ISOLATION LEVEL, was not executed if the default was left unmodified. In that case, MySQL's transaction isolation stayed at the database's own default, which is xiRepeatableRead, regardless of what was set in FireDAC's default configuration.
To fix this problem, we decided it is better to keep the default isolation level at xiUnspecified, which means that if you do not need a specific isolation level, the default configuration of the specific database is used – the one already predefined in the database, which you do not need to ask to be configured. The default isolation level is optimized for each specific database, as the defaults differ and some isolation levels are not even supported the same way by all database engines.
If a developer wants to use an isolation level different from the database's default, it must be set explicitly in the component configuration or in code. If the developer wants to use the database default, no code is needed.
Here are the default isolation levels of the major databases in FireDAC terms – again, with the property value left at xiUnspecified:
DB2 – xiReadCommitted
InterBase e Firebird – xiSnapshot
MySQL e MariaDB – xiRepeatableRead
Oracle – xiReadCommitted
Microsoft SQL Server – xiReadCommitted
SQLite – xiSerializible
PostgreSQL – xiReadCommitted
I hope this helps clarify the change and explain how to work around it, by changing your FireDAC connection's transaction isolation level to the one you specifically need, if it differs from the database default. This change was not listed in the RAD Studio 10.4 release notes and caused some concern (and bug reports).
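In code, the workaround is a one-line setting on the connection before it opens. A minimal sketch (the component name FDConnection1 is a placeholder), restoring the pre-10.4 behavior of always requesting Read Committed:

```pascal
uses
  FireDAC.Comp.Client, FireDAC.Stan.Option;

// ...

// Explicitly request a level instead of relying on the database default
// (xiUnspecified, the new default in RAD Studio 10.4).
FDConnection1.TxOptions.Isolation := xiReadCommitted;
FDConnection1.Connected := True;
// FireDAC now sends the corresponding command at session start, e.g.
// SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED on MySQL.
```

The same property can also be set at design time in the Object Inspector under TxOptions on the connection component.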
The recently released AMD Ryzen 9 5950x offers 16 cores and 32 threads, so let's see what kind of performance we can get out of parallel C++ compilation with those 32 threads. At the time of writing, the AMD Ryzen 9 5950x had the highest single-core CPU benchmark score, around 3515. C++Builder is a rapid application development tool for building Windows C++ applications. It offers normal compilation, and the most recent version includes the TwineCompile add-on, which will use all 32 threads of the Ryzen 5950x powerhouse to compile multiple files in a C++ project simultaneously. We have done two previous posts in this series, where we benchmarked the 5950x compiling ~750k lines of code in Delphi and parallel-building 300 native Windows applications in Delphi.
The project I used to test parallel C++ compilation is a large Windows C++ application with 128 forms and, according to C++Builder, roughly 254,000 lines of C++. The forms are taken from the 50 project forms found in this C++ Cross Platform Samples repository. We used the 50 forms two and three times over to reach the number 128. We originally built this project to benchmark the AMD Ryzen Threadripper 3990x, which has 64 cores and 128 threads. In any event, once we had 128 forms in the project, we added some generic C++ to each of the 128 units to bring them up to over 1,000 lines each. Keep in mind that every project represents a different workload, and results in your own projects may vary. Different C++ language features and project configurations can affect compile times.
Ryzen image courtesy of AMD
The full specs of the AMD Ryzen 9 5950x benchmark machine are: AMD Ryzen 9 5950x, 64 GB of DDR4 3200 MHz RAM, a 1 TB NVMe SSD plus a 2 TB HDD, an NVIDIA GeForce RTX 3070 8 GB, and Windows 10 Pro. To monitor the CPU and disk IO usage of the parallel C++Builder compilation, I used Task Manager DeLuxe, or TMX (which is itself built in Delphi). Task Manager DeLuxe is impressive for the amount of information it provides about your Windows system. TMX is available from MiTeC, which also produces a wide variety of Delphi components giving you access to much of the same information found in TMX. Below is the 32-CPU-thread overview that TMX provides. I took this screenshot during a normal synchronous C++Builder compile. You can see in the screenshot that it really only uses a single core at a time for the compilation.
Now let's look at a screenshot from Task Manager DeLuxe taken shortly into a parallel C++ compile using TwineCompile in C++Builder. In this screenshot you will see that it uses all of the threads for the compilation. You can see how it used all 32 threads, and TMX also provides a handy CPU clock speed monitor, as the AMD Ryzen 9 5950x turbo boosts up to 4.9 GHz (only 4.2 GHz in the screenshot). One interesting thing to note here is that because the turbo boost from 3.9 GHz to 4.9 GHz is not consistent, the benchmark results vary by a few seconds on each run.
If you want to learn more about the AMD Ryzen 9 5950x CPU architecture, AMD has a great video explaining the Zen 3 architecture.
Let's get to the number comparisons. Several different kinds of builds can be done in C++Builder, including a Debug build (-O0) and a Release build. In a Release build, different optimization flags can be selected (-O1, -O2, and -O3). Each flag has a different optimization target: -O1 generates the smallest possible code, -O2 generates the fastest possible code, and -O3 generates the most heavily optimized code. According to Embarcadero, -O3 offers speed improvements of up to twice the performance of -O2.
Debug builds are the fastest of the four optimization levels. This mainly matters when using normal compilation, because Release builds took a minute longer than Debug builds. When using parallel compilation, the build process was so fast in both Debug and Release mode that it hardly mattered, as all the results are quite close. The first chart here is the normal C++ Debug build (-O0) coming in at 396 seconds versus the parallel C++ Debug build (-O0) coming in at 33 seconds (12x faster!). If we run the numbers as lines of code per second, we get around 7,696 lines of code per second using parallel TwineCompile for -O0. The normal synchronous -O0 Debug build comes in at 641 lines per second.
In the second chart, we have the normal C++ Release build (-O1) coming in at 404 seconds versus the parallel C++ Release build (-O1) coming in at 32 seconds (~12x faster!). The parallel build times vary by a few seconds depending on the current turbo boost speed (anywhere between 3.9 GHz and 4.9 GHz). Running the numbers, we get around 7,937 lines of code per second using parallel TwineCompile for -O1, versus 628 lines per second for the normal synchronous -O1 build.
In the third chart, we have the normal C++ Release build (-O2) coming in at 449 seconds versus the parallel C++ Release build (-O2) coming in at 37 seconds (~12x faster!). Again, the parallel build times vary with the turbo boost speed. Running the numbers, we get around 6,864 lines of code per second using parallel TwineCompile for -O2, versus 565 lines per second for the normal synchronous -O2 build.
In the fourth and final chart, we have the normal C++ Release build (-O3) coming in at 450 seconds versus the parallel C++ Release build (-O3) coming in at 36 seconds (~12x faster!). I saw anywhere between 36 and 40 seconds here, depending on the turbo boost speed. Running the numbers, we get around 7,055 lines of code per second using parallel TwineCompile for -O3, versus 564 lines per second for the normal synchronous -O3 build.
Suffice it to say, parallel compilation delivered a significant productivity boost. Being able to compile a large C++ application in around 30 seconds lets you iterate faster (similar to the iteration speed achievable in Delphi), because compile times are so short. I call a 128-form, ~254k-line Windows project large; it is certainly not a small project (2-3 forms), and it is certainly not a massive project (millions and millions of lines of code).
Now let's compare the Delphi 10.4.1 compiler to C++Builder's parallel compilation. In the first blog post in this series, the AMD Ryzen 9 5950x compiled generic heavy Object Pascal code at around 61,500 lines per second, which extrapolates to 1 million lines of heavy generic Object Pascal code in 16 seconds. The fastest parallel C++Builder build (-O1) compiles 7,937 lines of code per second, which extrapolates to 1 million lines of C++ in ~126 seconds. The same synchronous C++Builder -O1 compile came in at 628 lines of code per second, which extrapolates to 1 million lines of C++ in 1,592 seconds. As you can see, C++Builder's parallel compilation moves toward Delphi's compile-speed productivity, as it is roughly an order of magnitude faster than normal compilation. C++Builder with parallel compilation on modern hardware via TwineCompile can get you closer to Delphi-level productivity, with the speed and power of C++ for your Windows applications.
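The extrapolations above are just the measured throughput divided into a target line count. As a quick sketch using the figures quoted in this post:

```pascal
program ExtrapolateCompileTimes;
// Extrapolate compile time for a 1,000,000-line project from the
// measured lines-per-second throughput figures quoted in this post.
{$APPTYPE CONSOLE}

uses
  System.SysUtils;

const
  TargetLines  = 1000000;
  DelphiRate   = 61500; // Delphi 10.4.1 compiler, lines/second
  ParallelRate = 7937;  // C++Builder parallel -O1 via TwineCompile
  SerialRate   = 628;   // C++Builder normal synchronous -O1
begin
  Writeln(Format('Delphi:           ~%d s', [Round(TargetLines / DelphiRate)]));   // ~16 s
  Writeln(Format('C++ parallel -O1: ~%d s', [Round(TargetLines / ParallelRate)])); // ~126 s
  Writeln(Format('C++ serial -O1:   ~%d s', [Round(TargetLines / SerialRate)]));   // ~1592 s
end.
```

Remember that lines-per-second throughput is workload-dependent, so these extrapolations are rough indicators rather than guarantees.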
The modern hardware is there, and the AMD Ryzen 9 5950x is great with its 16 cores and 32 threads, but the Ryzen 9 5950x is actually hard to get hold of at the moment. So what about using TwineCompile on an older machine? I have actually used an i7-3770 with 4 cores and 8 threads as my daily driver for the past 8 years. That machine's specs are roughly: Intel i7-3770, 16 GB RAM, 1 TB SSD, Windows 10 Home. Its single-thread CPU benchmark score is 2069, versus 3515 for the 5950x. The only upgrade I have really made to it in 8 years is installing a Samsung 860 EVO 1 TB SSD, which made a big difference to compile times. I again used Task Manager DeLuxe and took screenshots of normal and parallel compilation on the 8-thread i7-3770 machine. First we show normal compilation in C++Builder. As you will see in the screenshot, it uses only about 30% of the CPU to compile the C++ code.
Next, let's look at the i7-3770 machine again, this time using C++Builder to parallel compile the same project of 128 forms and roughly 254,000 lines of code. As you will see, this time it engages all 4 cores and 8 threads and uses the machine's full power for the compile.
Let's look at some numbers from this machine for synchronous and parallel compilation of the same 128-form C++Builder project. The first chart here is the normal C++ Debug build (-O0) coming in at 1,023 seconds versus the parallel C++ Debug build (-O0) coming in at 170 seconds (6x faster!). Running the numbers as lines of code per second, we get around 1,494 lines of code per second using parallel TwineCompile for -O0. The normal synchronous -O0 Debug build compiles at 248 lines per second.
The second chart here is the normal C++ Release build (-O2) coming in at 935 seconds versus the parallel C++ Release build (-O2) coming in at 142 seconds (~6x faster!). Running the numbers, we get around 1,788 lines of code per second using parallel TwineCompile for -O2, versus 271 lines per second for the normal synchronous -O2 build. One interesting thing I see here: on the AMD Ryzen 9 5950x machine the Debug builds were faster than the Release builds, whereas on this older machine the Debug builds are slower. I don't have exact numbers, but my guess is that this may be because Debug builds are larger than Release builds, so the speed of the solid-state drive comes into play.
As you can see, even on older hardware, C++Builder's parallel compilation delivers a HUGE productivity boost with much faster compile times. If you have an older machine without an SSD such as the Samsung 860 EVO, that is an easy upgrade for far better performance than a conventional hard drive. Or, if you are on an older machine that is not at least quad-core, you can pick up an older quad-core machine at a relatively low price.
In any event, whatever hardware you use (as long as it has at least 2 cores), you will see a significant improvement in compile times for your C++ projects when using the latest version of C++Builder with parallel compilation through TwineCompile. In this blog post we benchmarked the latest AMD Ryzen 9 5950x with its 16 cores and 32 threads and showed convincingly that it can make a huge difference to your productivity through iteration speed. A fairly large Windows C++ project with 128 forms and over 254,000 lines of code can be compiled in roughly 30-40 seconds with parallel compilation on 16 cores and 32 threads. That is incredible. On an older machine using normal synchronous compilation, the same project took anywhere from ~15 to ~17 minutes!
Now is a wonderful time to be a C++ developer building Windows (and iOS) applications in C++. We have seen how a single core on older hardware can take 60 minutes to compile a 1-million-line C++ project that now takes only ~2 minutes with parallel compilation on modern hardware! Parallel compilation brings much-needed development productivity to C++ without sacrificing the runtime speed and power of C++. C++Builder 10.4.1+ is the tool to get you there.
O recém-lançado AMD Ryzen 9 5950x oferece 16 núcleos e 32 threads, então vamos ver que tipo de desempenho podemos obter de uma compilação paralela de C ++ com esses 32 threads. No momento em que este artigo foi escrito, o AMD Ryzen 9 5950x tinha a pontuação de benchmark de CPU de núcleo único mais alta, em torno de 3515. C ++ Builder é uma ferramenta de desenvolvimento de aplicativos rápida para criar aplicativos C ++ do Windows. Ele oferece compilação normal e na versão mais recente inclui um add-on chamado TwineCompile que usará todos os 32 threads do Ryzen 5950x powerhouse para compilar vários arquivos no projeto C ++ simultaneamente. Fizemos dois posts anteriores onde comparamos o 5950x com ~ 750k linhas de compilação de código em Delphi e construção paralela de 300 aplicativos nativos do Windows em Delphi.
O projeto que usei para testar a compilação paralela C ++ é um grande aplicativo C ++ do Windows com 128 formulários e, de acordo com o C ++ Builder, aproximadamente 254.000 linhas de C ++. Os formulários são retirados dos 50 formulários de projeto encontrados neste repositório C ++ Cross Platform Samples . Usamos os 50 formulários 2 e 3 vezes para chegar ao número 128. Originalmente, construímos este projeto para comparar o AMD Ryzen Threadripper 3990x, que tem 64 núcleos e 128 threads. Em qualquer caso, uma vez que tínhamos 128 formulários no projeto, adicionamos algum C ++ genérico a cada uma das 128 unidades para aumentá-las para mais de 1000 linhas cada. Lembre-se de que cada projeto representa uma carga de trabalho diferente e os resultados em seus próprios projetos podem variar. Diferentes recursos da linguagem C ++ e configurações de projeto podem afetar os tempos de compilação.
Ryzen Imagem cortesia da AMD
As especificações completas na máquina de benchmark AMD Ryzen 9 5950x são AMD Ryzen 9 5950x, 64 GB DDR4 3200 MHz de RAM, 1 TB NVMe SSD + 2 TB HDD, NVIDIA GeForce RTX 3070 8 GB e Windows 10 Pro. Para monitorar o uso da CPU e do disco IO da compilação paralela do C ++ Builder, usei o Gerenciador de Tarefas DeLuxe ou TMX (que também é construído em Delphi). O Task Manager DeLuxe é incrível pela quantidade de informações que fornece sobre o seu sistema Windows. TMX está disponível no MiTeC que também produz uma grande variedade de componentes Delphi que fornecem acesso a muitas das mesmas informações encontradas no TMX. Abaixo está a visualização de 32 threads de CPU que a TMX oferece. Eu fiz esta captura de tela durante a compilação normal do C ++ Builder síncrono. Você pode ver na imagem que ele está realmente usando apenas um único núcleo simultaneamente para a compilação.
A seguir, vamos dar uma olhada na captura de tela do Task Manager DeLuxe logo após a compilação paralela do C ++ usando TwineCompile no C ++ Builder. Você verá nesta imagem que ele usa todos os tópicos para a compilação. Você pode ver como ele usou todos os 32 threads e o TMX também oferece um prático monitor de velocidade do clock da CPU, já que o turbo AMD Ryzen 9 5950x aumenta para 4,9 GHz (apenas 4,2 GHz na imagem). Uma coisa interessante a notar aqui é que, como o turbo que aumenta de 3,9 Ghz para 4,9 Ghz não é consistente, os benchmarks mudam alguns segundos a cada execução.
Se você quiser saber mais sobre a arquitetura de CPU AMD Ryzen 9 5950x, a AMD tem um ótimo vídeo onde explica a arquitetura Zen 3.
Vamos fazer a comparação dos números. Existem vários tipos diferentes de compilações que podem ser feitas no C ++ Builder. Isso inclui uma compilação de depuração (-O0) e uma compilação de lançamento. Na versão Release, diferentes sinalizadores de otimização podem ser selecionados (-O1, -O2 e -O3). Cada sinalizador possui um alvo de otimização diferente. -O1 gera o menor código possível, -O2 gera o código mais rápido possível e -O3 gera o código mais otimizado. De acordo com a Embarcadero -O3 oferece melhorias de velocidade de até duas vezes o desempenho de -O2.
The Debug builds are the fastest of the four optimization levels. This mainly makes a difference when using the normal compile, because the Release builds took up to a minute longer than the Debug builds. When using the parallel compile, the build process was so fast in both Debug and Release mode that it hardly mattered, as all the scores are fairly close together. The first chart here is the normal C++ Debug build (-O0) coming in at 396 seconds versus the parallel C++ Debug build (-O0) coming in at 33 seconds (12x faster!). If we run the numbers as lines of code per second, we get around 7,696 lines of code per second using the parallel TwineCompile for -O0. The normal synchronous Debug -O0 build comes in at 641 lines per second.
In the second chart, we have the normal C++ Release build (-O1) coming in at 404 seconds versus the parallel C++ Release build (-O1) coming in at 32 seconds (~12x faster!). The parallel build times vary depending on the current turbo boost speed (anywhere between 3.9GHz and 4.9GHz). If we run the numbers as lines of code per second, we get around 7,937 lines of code per second using the parallel TwineCompile for -O1. The normal synchronous -O1 build comes in at 628 lines per second.
In the third chart, we have the normal C++ Release build (-O2) coming in at 449 seconds versus the parallel C++ Release build (-O2) coming in at 37 seconds (~12x faster!). The parallel build times vary depending on the current turbo boost speed (anywhere between 3.9GHz and 4.9GHz). If we run the numbers as lines of code per second, we get around 6,864 lines of code per second using the parallel TwineCompile for -O2. The normal synchronous -O2 build comes in at 565 lines per second.
In the fourth and final chart, we have the normal C++ Release build (-O3) coming in at 450 seconds versus the parallel C++ Release build (-O3) coming in at 36 seconds (~12x faster!). The parallel build times vary depending on the current turbo boost speed (anywhere between 3.9GHz and 4.9GHz). I saw between 36 and 40 seconds here. If we run the numbers as lines of code per second, we get around 7,055 lines of code per second using the parallel TwineCompile for -O3. The normal synchronous -O3 build comes in at 564 lines per second.
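As a quick back-of-the-envelope check (not part of the original benchmark), the throughput figures above follow directly from the ~254,000-line project size divided by the reported build times:

```python
# Sanity check: derive lines/sec and speedup from the reported
# build times on the 32-thread AMD Ryzen 9 5950x machine.
LINES = 254_000

# (flag, normal seconds, parallel seconds) as reported above
results = [
    ("-O0", 396, 33),
    ("-O1", 404, 32),
    ("-O2", 449, 37),
    ("-O3", 450, 36),
]

for flag, normal_s, parallel_s in results:
    normal_lps = LINES // normal_s      # synchronous lines per second
    parallel_lps = LINES // parallel_s  # TwineCompile lines per second
    speedup = normal_s / parallel_s
    print(f"{flag}: {normal_lps} -> {parallel_lps} lines/sec ({speedup:.1f}x)")
```

Running this reproduces the numbers quoted in the charts, e.g. `-O0: 641 -> 7696 lines/sec (12.0x)`.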
Suffice it to say, the productivity boost from parallel compilation is significant. Being able to compile a large C++ application in around 30 seconds lets you iterate faster (similar to the iteration speed you can achieve in Delphi) because the compile times are so fast. I rate 128 forms and ~254k lines of code as large for a Windows project. It is certainly not a small project (2-3 forms), and it is certainly not a massive project (millions and millions of lines of code).
Now let's compare the Delphi 10.4.1 compiler with the C++Builder parallel compile. In our first blog post in this series, an AMD Ryzen 9 5950x CPU compiled generics-heavy Object Pascal code at around 61,500 lines per second, which extrapolates to 1 million lines of generics-heavy Object Pascal code in 16 seconds. The fastest C++Builder parallel compile (-O1) compiles 7,937 lines of code per second, which extrapolates to 1 million lines of C++ in ~126 seconds. The same synchronous C++Builder -O1 compile came in at 628 lines of code per second, which extrapolates to 1 million lines of C++ code in 1,592 seconds. As you can see, C++Builder parallel compilation approaches Delphi productivity in compile speed, as it is orders of magnitude faster than the normal compile. C++Builder with parallel compilation on modern hardware through TwineCompile can get you close to Delphi productivity with the speed and power of C++ for your Windows applications.
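The extrapolation above is simple division; a minimal sketch (using the measured rates from this post, applied to a hypothetical 1-million-line code base):

```python
# Extrapolate measured lines-per-second rates to 1 million lines of code.
MILLION = 1_000_000

rates = {
    "Delphi (generics-heavy Object Pascal)": 61_500,  # from the first post in this series
    "C++Builder parallel -O1 (TwineCompile)": 7_937,
    "C++Builder synchronous -O1": 628,
}

for name, lines_per_sec in rates.items():
    seconds = round(MILLION / lines_per_sec)
    print(f"{name}: ~{seconds} seconds per 1M lines")
```

This yields ~16, ~126, and ~1592 seconds respectively, matching the figures in the paragraph above.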
Modern hardware is great, and the AMD Ryzen 9 5950x with its 16 cores and 32 threads is excellent, but the Ryzen 9 5950x CPU is actually pretty hard to get hold of at the moment. What about using TwineCompile on an older machine? I have actually been using an i7-3770 with 4 cores and 8 threads for the last 8 years as my daily driver. The specs of this machine are roughly an Intel i7-3770, 16GB RAM, 1TB SSD, Windows 10 Home. Its single-thread CPU benchmark score is 2069, versus 3515 on the 5950x. The only upgrade I really made to it in 8 years was installing a Samsung 860 EVO 1TB SSD, and that made a big difference in compile times. I used Task Manager DeLuxe again and took screenshots of the normal compile and the parallel compile on the 8-thread i7-3770 machine. First we'll show a normal compile in C++Builder. As you will see in the screenshot, it is only using around 30% of the CPU to compile the C++ code.
Next, let's take a look at the i7-3770 machine again, this time using C++Builder to compile the same 128-form, ~254,000-line project in parallel. As you will see, this time it is hitting all 4 cores and 8 threads and using the full power of the machine to compile.
Let's look at some numbers from this machine when compiling the same 128-form C++Builder project synchronously and in parallel. The first chart here is the normal C++ Debug build (-O0) coming in at 1,023 seconds versus the parallel C++ Debug build (-O0) coming in at 170 seconds (6x faster!). If we run the numbers as lines of code per second, we get around 1,494 lines of code per second using the parallel TwineCompile for -O0. The normal synchronous Debug -O0 build comes in at 248 lines per second.
The second chart here is the normal C++ Release build (-O2) coming in at 935 seconds versus the parallel C++ Release build (-O2) coming in at 142 seconds (~6x faster!). If we run the numbers as lines of code per second, we get around 1,788 lines of code per second using the parallel TwineCompile for -O2. The normal synchronous -O2 build comes in at 271 lines per second. One interesting thing I see here is that on the AMD Ryzen 9 5950x machine the Debug builds were faster than the Release builds, whereas on the older machine here the Debug builds are slower. I don't have any hard numbers, but I would guess this may be because Debug builds are larger than Release builds, so the speed of the solid-state drive comes into play.
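The same arithmetic applies on the older machine; a quick check of the i7-3770 numbers reported above:

```python
# Throughput and speedup for the 8-thread i7-3770 machine,
# using the build times reported in the two charts above.
LINES = 254_000

results = [
    ("-O0 Debug", 1023, 170),
    ("-O2 Release", 935, 142),
]

for flag, normal_s, parallel_s in results:
    print(f"{flag}: {LINES // normal_s} -> {LINES // parallel_s} lines/sec, "
          f"{normal_s / parallel_s:.1f}x faster "
          f"(normal build took ~{normal_s / 60:.0f} minutes)")
```

This reproduces the ~6x speedup and the ~15-17 minute synchronous build times cited in the conclusion.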
As you can see, even on older hardware, C++Builder parallel compilation provides a HUGE productivity boost with much faster compile times. If you have an older machine and are not running an SSD like the Samsung 860 EVO, that is an easy upgrade to get much better performance than a regular hard drive. Or if you are running an older machine that is not at least quad core, you can pick up older quad-core machines at relatively low cost.
In any case, regardless of the hardware you are running (as long as it has at least 2 cores), you will see a significant compile-time improvement for your C++ projects when using the latest C++Builder with parallel compilation through TwineCompile. In this blog post, we benchmarked the latest AMD Ryzen 9 5950x with its 16 cores and 32 threads and showed conclusively that it can make a big difference in boosting your productivity through iteration speed. A relatively large Windows C++ project with 128 forms and over 254,000 lines of code can be compiled in around 30-40 seconds through parallel compilation with 16 cores and 32 threads. That is incredible. An older machine using normal synchronous compilation took between ~15 minutes and ~17 minutes for the same project!
Now is a great time to be a C++ developer building Windows (and iOS) applications in C++. We have seen how a single core on older hardware can take 60 minutes to compile a C++ project with 1 million lines of code, which now takes only ~2 minutes using parallel compilation on modern hardware! Parallel compilation brings much-needed productivity to C++ development without sacrificing the speed and power of C++ runtime performance. C++Builder 10.4.1+ is the tool that can get you there.
The recently released AMD Ryzen 9 5950x offers 16 cores and 32 threads, so let's see what kind of performance we can get out of a parallel C++ compile with those 32 threads. At the time of this writing, the AMD Ryzen 9 5950x has the highest single-core CPU benchmark score, at around 3515. C++Builder is a rapid application development tool for building C++ Windows apps. It offers normal compilation, and the latest version includes an add-on called TwineCompile that will use all 32 threads of the Ryzen 5950x powerhouse to compile multiple files in a C++ project simultaneously. We did two earlier posts where we benchmarked the 5950x compiling ~750k lines of code in Delphi and building 300 native Windows apps in Delphi in parallel.
The project I used to test the parallel C++ compile is a large C++ Windows app with 128 forms and, according to C++Builder, ~254,000 lines of C++. The forms are taken from the 50 project forms found in this C++ Cross Platform Samples repository. We used the 50 forms 2 and 3 times each to get to the 128 number. We originally built this project to benchmark the AMD Ryzen Threadripper 3990x, which has 64 cores and 128 threads. In any event, once we had 128 forms in the project, we added some generic C++ to each of the 128 units to bring them to over 1,000 lines each. Keep in mind that every project is a different workload, and results on your own projects may vary. Different C++ language features and project configurations can affect compile times.
Ryzen Image courtesy of AMD
Der kürzlich veröffentlichte AMD Ryzen 9 5950x bietet 16 Kerne und 32 Threads. Lassen Sie uns also sehen, welche Leistung wir aus einer parallelen C++ – Kompilierung mit diesen 32 Threads ziehen können. Zum Zeitpunkt dieses Schreibens weist der AMD Ryzen 9 5950x mit rund 3515 den höchsten Single-Core- CPU-Benchmark- Wert auf. C++Builder ist ein schnelles Tool zur Anwendungsentwicklung zum Erstellen von C++ – Windows-Apps. Es bietet normale Kompilierung und enthält in der neuesten Version ein Add-On namens TwineCompile , das alle 32 Threads des Ryzen 5950x-Kraftpakets verwendet, um mehrere Dateien im C++ – Projekt gleichzeitig zu kompilieren. Wir haben zwei frühere Beiträge verfasst, in denen wir den 5950x mit ~ 750.000 Codezeilen verglichen haben, die in Delphi kompiliert wurden, und 300 native Windows-Apps in Delphi parallel erstellt haben.
Das Projekt, das ich zum Testen der parallelen C++ – Kompilierung verwendet habe, ist eine große C++ – Windows-App mit 128 Formularen und laut C++Builder ~ 254.000 Zeilen C++. Die Formulare stammen aus den 50 Projektformularen in diesem C++ Cross Platform Samples- Repository. Wir haben die 50 Formulare 2 und 3 Mal verwendet, um zur 128-Nummer zu gelangen. Ursprünglich haben wir dieses Projekt gebaut, um den AMD Ryzen Threadripper 3990x mit 64 Kernen und 128 Threads zu bewerten. Sobald wir 128 Formulare im Projekt hatten, fügten wir jeder der 128 Einheiten generisches C++ hinzu, um sie auf jeweils über 1000 Zeilen zu bringen. Beachten Sie, dass jedes Projekt eine andere Arbeitsbelastung aufweist und die Ergebnisse in Ihren eigenen Projekten variieren können. Verschiedene C++ – Sprachfunktionen und Projektkonfigurationen können sich auf die Kompilierungszeiten auswirken.
Ryzen Bild mit freundlicher Genehmigung von AMD
Die vollständigen technischen Daten des AMD Ryzen 9 5950x-Benchmark-Computers sind AMD Ryzen 9 5950x, 64 GB DDR4 3200 MHz RAM, 1 TB NVMe SSD + 2 TB Festplatte, NVIDIA GeForce RTX 3070 8 GB und Windows 10 Pro. Um die CPU- und Festplatten-E / A-Nutzung der parallelen Kompilierung von C ++ Builder zu überwachen, habe ich Task Manager DeLuxe oder TMX (ebenfalls in Delphi integriert) verwendet. Der Task-Manager DeLuxe bietet eine erstaunliche Menge an Informationen zu Ihrem Windows-System. TMX ist bei MiTeC erhältlich Dadurch werden auch eine Vielzahl von Delphi-Komponenten hergestellt, mit denen Sie auf viele der gleichen Informationen zugreifen können, die in TMX enthalten sind. Unten finden Sie die 32-CPU-Thread-Ansicht, die TMX bietet. Ich habe diesen Screenshot während der normalen synchronen C++Builder-Kompilierung gemacht. Sie können im Screenshot sehen, dass wirklich nur ein einziger Kern gleichzeitig für die Kompilierung verwendet wird.
Schauen wir uns als nächstes den Screenshot aus dem Task Manager DeLuxe kurz nach der parallelen C++-Kompilierung mit TwineCompile in C++Builder an. Sie werden in diesem Screenshot sehen, dass er alle Threads für die Kompilierung verwendet. Sie können sehen, wie alle 32 Threads verwendet wurden. TMX bietet auch eine praktische Überwachung der CPU-Taktfrequenz, da der AMD Ryzen 9 5950x Turbo-Boost bis zu 4,9Ghz erreicht (im Screenshot nur 4,2Ghz). Interessant ist hier, dass sich die Benchmarks bei jedem Durchlauf um ein paar Sekunden ändern, da das Turbo-Boosting von 3,9 Ghz auf 4,9 Ghz nicht konsistent ist.
Wenn Sie mehr über die AMD Ryzen 9 5950x CPU-Architektur erfahren möchten, hat AMD ein großartiges Video, in dem sie die Zen 3-Architektur erklären.
Kommen wir nun zum Vergleich der Zahlen. Es gibt eine Reihe von verschiedenen Arten von Builds, die in C++Builder durchgeführt werden können. Dazu gehören ein Debug-Build (-O0) und ein Release-Build. Beim Release-Build können verschiedene Optimierungsflags ausgewählt werden (-O1, -O2 und -O3). Jedes Flag hat ein anderes Optimierungsziel. -O1 erzeugt den kleinstmöglichen Code, -O2 erzeugt den schnellstmöglichen Code und -O3 erzeugt den am meisten optimierten Code. Laut Embarcadero bringt -O3 Geschwindigkeitsverbesserungen von bis zu doppelt so viel Leistung wie -O2.
Die Debug-Builds sind die schnellsten der vier Optimierungsstufen. Dies macht sich vor allem beim normalen Kompilieren bemerkbar, da die Release-Builds bis zu einer Minute länger brauchen als die Debug-Builds. Bei Verwendung der parallelen Kompilierung war der Build-Prozess sowohl im Debug- als auch im Release-Modus so schnell, dass es kaum eine Rolle spielt, da die Ergebnisse alle ziemlich nah beieinander liegen. Das erste Diagramm hier zeigt den normalen C++-Debug-Build (-O0) mit 396 Sekunden im Vergleich zum parallelen C++-Debug-Build (-O0) mit 33 Sekunden (12-mal schneller!). Wenn wir die Zahlen auf Codezeilen pro Sekunde umrechnen, erhalten wir etwa 7.696 Codezeilen pro Sekunde, wenn wir das parallele TwineCompile für -O0 verwenden. Der normale Debug-Synchron-Build für -O0 kommt auf 641 Zeilen pro Sekunde zum Kompilieren.
Im zweiten Diagramm haben wir den normalen C++ Release Build (-O1) mit 404 Sekunden gegenüber dem parallelen C++ Release Build (-O1) mit 32 Sekunden (~12X schneller!). Die Sekunden für den parallelen Build variieren je nach aktueller Geschwindigkeit des Turbo Boosts (irgendwo zwischen 3,9 Ghz und 4,9 Ghz). Wenn wir die Zahlen auf Codezeilen pro Sekunde umrechnen, erhalten wir etwa 7.937 Codezeilen pro Sekunde, wenn wir das parallele TwineCompile für -O1 verwenden. Der normale synchrone -O1-Build kommt auf 628 Zeilen pro Sekunde zum Kompilieren.
Im dritten Diagramm sehen wir den normalen C++ Release Build (-O2) mit 449 Sekunden gegenüber dem parallelen C++ Release Build (-O2) mit 37 Sekunden (~12X schneller!). Die Sekunden für den parallelen Build variieren je nach aktueller Geschwindigkeit des Turbo Boosts (irgendwo zwischen 3,9 Ghz und 4,9 Ghz). Wenn wir die Zahlen auf Codezeilen pro Sekunde umrechnen, erhalten wir etwa 6.864 Codezeilen pro Sekunde mit dem parallelen TwineCompile für -O2. Der normale synchrone -O2-Build kommt auf 565 Zeilen pro Sekunde zum Kompilieren.
Im vierten und letzten Diagramm haben wir den normalen C++ Release Build (-O3) mit 450 Sekunden gegenüber dem parallelen C++ Release Build (-O3) mit 36 Sekunden (~12X schneller!). Die Sekunden des parallelen Builds variieren je nach aktueller Geschwindigkeit des Turbo Boosts (irgendwo zwischen 3,9 Ghz und 4,9 Ghz). Ich habe hier zwischen 36 Sekunden und 40 Sekunden gesehen. Wenn wir die Zahlen auf Codezeilen pro Sekunde umrechnen, erhalten wir etwa 7.055 Codezeilen pro Sekunde, wenn wir das parallele TwineCompile für -O3 verwenden. Der normale synchrone -O3-Build kommt auf 564 Zeilen pro Sekunde zum Kompilieren.
Es genügt zu sagen, dass der Produktivitätsschub durch die parallele Kompilierung signifikant ist. Wenn man in der Lage ist, eine große C++-Anwendung in etwa 30 Sekunden zu kompilieren, kann man schneller iterieren (ähnlich der Iterationsgeschwindigkeit, die man in Delphi erreichen kann), weil die Kompilierzeiten so schnell sind. Ich bezeichne 128 Formulare und ~254k Zeilen Code für ein Windows-Projekt als groß. Es ist sicherlich kein kleines Projekt (2-3 Formulare) und es ist sicherlich kein großes Projekt (Millionen und Abermillionen von Codezeilen).
Vergleichen wir nun den Delphi 10.4.1-Compiler mit dem C++Builder Parallel-Compiler. In unserem ersten Blog in dieser Serie kompiliert eine AMD Ryzen 9 5950x CPU generiklastigen Object Pascal Code mit etwa 61.500 Zeilen pro Sekunde, was auf 1 Million Zeilen generiklastigen Object Pascal Code in 16 Sekunden hochgerechnet werden kann. Der schnellste parallele C++Builder-Build (-O1) kompiliert 7.937 Zeilen Code pro Sekunde, was auf 1 Million Zeilen C++ in ~126 Sekunden extrapoliert werden kann. Die gleiche synchrone C++-Kompilierung mit C++Builder -O1 betrug 628 Codezeilen pro Sekunde, was auf 1 Million C++-Codezeilen in 1592 Sekunden hochgerechnet werden kann. Wie Sie sehen können, nähert sich die parallele Kompilierung in C++Builder der Produktivität von Delphi an, da die Kompiliergeschwindigkeit um Größenordnungen schneller ist als die normale Kompilierung. C++Builder mit paralleler Kompilierung auf moderner Hardware durch TwineCompile kann Sie nahe an die Produktivität von Delphi mit der Geschwindigkeit und Leistung von C++ für Ihre Windows-Anwendungen bringen.
Moderne Hardware ist und der AMD Ryzen 9 5950x ist mit seinen 16 Kernen und 32 Threads großartig, aber die Ryzen 9 5950x CPU ist im Moment tatsächlich schwer zu bekommen. Wie sieht es mit der Verwendung von TwineCompile auf einer älteren Maschine aus? Ich habe in den letzten 8 Jahren einen i7-3770 mit 4 Kernen und 8 Threads als meinen täglichen Fahrer benutzt. Die Spezifikationen dieses Rechners sind ungefähr ein Intel i7-3770, 16GB RAM, 1TB SSD, Windows 10 Home. Sein CPU-Benchmark-Ergebnis für einen einzelnen Thread liegt bei 2069 im Vergleich zu 3515 beim 5950x. Das einzige Upgrade, das ich in den letzten 8 Jahren vorgenommen habe, war der Einbau einer Samsung 860 EVO 1TB SSD und das hat einen großen Unterschied bei den Kompilierzeiten gemacht. Ich habe wieder den Task Manager DeLuxe verwendet und Screenshots des normalen Kompilierens und des parallelen Kompilierens auf der i7-3770 8-Thread-Maschine gemacht. Zuerst zeigen wir einen normalen Kompiliervorgang in C++Builder. Wie Sie im Screenshot sehen werden, werden nur etwa 30 % der CPU für die Kompilierung des C++-Codes verwendet.
Als Nächstes werfen wir einen Blick auf den i7-3770-Rechner, diesmal unter Verwendung von C++Builder beim parallelen Kompilieren desselben 128-Form-Projekts und etwa 254.000 Codezeilen. Wie Sie sehen werden, werden dieses Mal alle 4 Kerne und 8 Threads angesprochen und die volle Leistung der Maschine zum Kompilieren genutzt.
Let's look at some numbers from this machine when compiling the same 128-form C++Builder project synchronously and in parallel. The first chart shows the normal C++ debug build (-O0) at 1,023 seconds versus the parallel C++ debug build (-O0) at 170 seconds (6x faster!). Converting those numbers to lines of code per second gives roughly 1,494 lines of code per second with parallel TwineCompile at -O0. The normal synchronous debug -O0 build comes in at 248 lines per second.
The second chart shows the normal C++ release build (-O2) at 935 seconds versus the parallel C++ release build (-O2) at 142 seconds (~6x faster!). Converting those numbers to lines of code per second gives roughly 1,788 lines of code per second with parallel TwineCompile at -O2. The normal synchronous -O2 build comes in at 271 lines per second. One interesting thing I see here is that on the AMD Ryzen 9 5950X machine the debug builds were faster than the release builds, while on this older machine the debug builds are slower. I don't have hard numbers, but I would guess this is because debug builds are larger than release builds, so the speed of the solid-state drive comes into play.
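The lines-per-second figures above are derived by dividing the project size by the measured build times. A quick sketch of the conversion (the 254,000-line count is the approximate project size given above, so the results differ from the article's rounded figures by a line or two):

```python
# Convert a measured build time into a lines-of-code-per-second rate.
def lines_per_second(total_lines: int, build_seconds: float) -> float:
    return total_lines / build_seconds

PROJECT_LINES = 254_000  # approximate size of the 128-form project

print(round(lines_per_second(PROJECT_LINES, 170)))   # parallel -O0: 1494 lines/s
print(round(lines_per_second(PROJECT_LINES, 1023)))  # normal -O0:   248 lines/s
print(round(lines_per_second(PROJECT_LINES, 142)))   # parallel -O2: 1789 lines/s
print(round(lines_per_second(PROJECT_LINES, 935)))   # normal -O2:   272 lines/s
```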
As you can see, even on older hardware, C++Builder's parallel compilation delivers a HUGE productivity boost with much faster compile times. If you have an older machine and are not using an SSD like the Samsung 860 EVO, that is an easy upgrade that gives you much better performance than a regular hard drive. And if you are running an older machine that is not at least quad-core, you can pick up older quad-core machines relatively cheaply.
In any case, regardless of the hardware you use (as long as it has at least 2 cores), you will see a significant improvement in compile times for your C++ projects when using the latest C++Builder with parallel compilation through TwineCompile. In this blog post we benchmarked the latest AMD Ryzen 9 5950X with its 16 cores and 32 threads and showed conclusively that it can make a big difference in boosting your productivity through iteration speed. A relatively large Windows C++ project with 128 forms and over 254,000 lines of code can be compiled in roughly 30-40 seconds using parallel compilation on 16 cores and 32 threads. That is incredible. An older machine with normal synchronous compilation took between ~15 minutes and ~17 minutes for the same project!
Now is a great time to be a C++ developer building Windows (and iOS) applications in C++. We have seen how a single core on older hardware could take 60 minutes to compile a 1-million-line C++ project, something that now takes only ~2 minutes with parallel compilation on modern hardware! Parallel compilation brings much-needed productivity to C++ development without sacrificing the speed and power of the C++ runtime. C++Builder 10.4.1+ is the tool that can get you there.
In October 2020, Embarcadero sponsored and released a new 6.0 fork version of Dev-C++ with improvements that included an updated GCC 9.2.0 compiler with support for Windows 10 and C++17/C++20, high DPI, UTF8 files, improved icons, and a dark theme option. An earlier update in July featured an upgrade of the Dev-C++ code to Delphi 10.4.
Why Embarcadero decided to update Dev-C++
Dev-C++ was first released in 1998 by Colin Laplace with Bloodshed Software. A new fork, Orwell Dev-C++, was released in 2011, but updates stopped in 2015.
The Embarcadero upgrade migrated Dev-C++ from Delphi 7 to the latest version and also introduced a new, more modern interface. All of these improvements put the prospect of faster, smoother Windows development in C++ and Delphi into the hands of developers worldwide.
A new white paper by Embarcadero MVP Eli M., titled “Embarcadero Dev-C++: Successfully Modernizing a Popular Windows C++ IDE”, traces the background and implementation of the modernization project from the original plan to the new release.
Planning and implementing the upgrade
The Dev-C++ upgrade required a number of factors to be taken into account, starting with the questions of whether the upgrade would be worth the investment and whether third-party components, tools, and libraries would be available or would have to be replaced. How receptive the codebase would be to the upgrade, and how much the project itself would benefit from one, were also important questions to answer.
The upgrade to Dev-C++ unfolded in two stages. The first stage involved making the fewest changes necessary for the project to compile in the latest version of Delphi. The second stage involved complementary changes such as updating the compiler, Unicode support, and full Windows 10 support with Embarcadero Dev-C++ 6.0.
Who was involved in the upgrade?
The upgrade team's coordinator was an Embarcadero MVP with over 20 years of experience, while the remaining members joined the project from across the United States, Ukraine, Mexico, and New Zealand. The team also included a graphic designer for the new interface design and a quality assurance engineer to verify the functionality of the upgrade.
How receptive was the codebase to the upgrade?
The upgrade team measured the malleability of the Dev-C++ codebase using a built-in Delphi tool called Method Toxicity Metrics. This tool assigns a toxicity score to each function it scans, and it found the Dev-C++ codebase receptive to the upgrade.
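The exact formula behind the Method Toxicity Metrics tool is not given here, but the idea of a per-function "toxicity" score can be illustrated with a toy metric. The thresholds and weights below are purely hypothetical, chosen only to show the shape of such a scoring pass, not the actual metric the tool uses:

```python
# Toy per-function toxicity score: penalize function length and nesting
# depth beyond a threshold. The thresholds and weights here are
# hypothetical illustrations, not the real Method Toxicity Metrics formula.
def toxicity(line_count: int, max_nesting: int,
             line_threshold: int = 30, nesting_threshold: int = 3) -> float:
    score = 0.0
    if line_count > line_threshold:
        score += line_count / line_threshold      # longer functions score higher
    if max_nesting > nesting_threshold:
        score += max_nesting / nesting_threshold  # deeper nesting scores higher
    return round(score, 2)

# A short, flat function scores 0; a long, deeply nested one scores high.
print(toxicity(line_count=20, max_nesting=2))   # 0.0
print(toxicity(line_count=120, max_nesting=6))  # 6.0
```

A pass like this, run over every method in a codebase, gives exactly the kind of ranked worst-offenders list the white paper publishes.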
A partial list of the functions and their toxicity scores is published in the white paper “Embarcadero Dev-C++: Successfully Modernizing a Popular Windows C++ IDE”.
Were third-party components, tools, and libraries up to date?
The most important components in the Dev-C++ upgrade were SynEdit, the core syntax-highlighting editor control; FastMM4, a custom memory manager; AStyle, a C++ code syntax-formatting utility; and TDM-GCC 4.9.2, a custom GCC compiler and library distribution for Windows development.
The white paper also looks in detail at how the Embarcadero team evaluated these and other third-party components, tools, and libraries from a modernization standpoint to gauge the viability of the Dev-C++ upgrade, and how it pieced the third-party setup together.
What were the main benefits of the upgrade for the Dev-C++ project?
Support for VCL styles and high DPI are just two of the major improvements Dev-C++ gained as a result of the modernization project. Dev-C++ is more useful, powerful, and easy to use, able to keep pace with developers and the evolution of C++.
A few weeks ago, we released an “Apple platforms patch” focused on improving our support for the latest versions of macOS and iOS. While that patch resolved issues with importing Apple SDKs and with on-device debugging, there were still some problems with linking specific libraries (such as ClientDataSet) and with C++ iOS development.
We have now released an updated version of the same patch (via the GetIt package manager and soon via the my.embarcadero.com download portal). We have also retired the old patch, since the new patch includes and replaces it. If you have already installed the first patch, simply install this new one on top of it. If not, all you need to do is install the “December patch”. Note that the Welcome page should notify you of the new patch, and that when you open the GetIt package manager you should see both patches; this is because GetIt lists every package already installed on your system regardless of its availability, since that is the only way to uninstall such a package. There is no need, however, to uninstall the old patch.
Below is a copy of the new patch's readme, with more detailed information on the old and new issues addressed.
December Apple Platforms Patch for RAD Studio 10.4.1 Readme
This patch addresses several issues related to RAD Studio 10.4.1's support for Xcode 12, iOS 14, and macOS Big Sur, which were not available when 10.4.1 shipped. We issued a similar patch in November. This new patch replaces the previous one and offers further improvements in the same area. Installing this patch over the existing patch will replace all the required files.
In the previous version of this patch (released in November 2020), we included:
* An RTL fix for an issue with exceptions, which also caused problems at application shutdown, for macOS applications running on the recently released macOS 11.0 Big Sur. The corresponding public issue was logged in the Quality Portal as RSP-30000. For this issue, the patch includes modified source code and compiled binary files.
* A new version of PAServer for macOS, which includes fixes for several issues:
* SDK import from Xcode 12
* Debugging applications on an iOS 14 device
* This includes Quality Portal reports such as RSP-30806, RSP-31014, RSP-31667, and RSP-31049.
This version of the patch includes the fixes above, but adds several new ones:
* A compilation issue with the ClientDataSet component (RSP-31795)
* Several issues using C++ with the iOS 14 SDK:
* The error ‘unknown type name __UINTPTR_TYPE__’ when building with iOS 14
* Errors when building with the iOS 14 SDK, including ‘invalid node: This may result from using a map iterator as a sequence iterator, or vice versa’ or a linker error (RSP-31014)
* Linker errors referring to DBX, such as ‘[ld Error] Error: “__ZdlPv”, referenced from: __ZN9DBXObjectD0Ev in DBXCommon.o’
TwineCompile — это надстройка для C ++ Builder, которая в настоящее время находится в GetIt бесплатно, чтобы обновлять подписчиков для всех выпусков, включая Pro. Посмотрим, насколько хорошо это работает с реальными проектами.
TwineCompile значительно ускоряет время компиляции C ++ Builder. Чтобы проверить это, мы получили две большие библиотеки C ++ с открытым исходным кодом, которые собираются с помощью C ++ Builder: Xerces и SDL 2 . Xerces — это библиотека XML, а SDL — очень известная библиотека окон и ввода с открытым исходным кодом, часто используемая для игр. Они оба являются отличными тестовыми примерами, поскольку представляют собой большие кодовые базы C ++ приличного размера.
In the IDE, without TwineCompile, Xerces took 5 minutes and 19 seconds to build. With TwineCompile, Xerces took 51 seconds. That's a 6x speedup.
In the IDE, without TwineCompile, SDL 2 took 2 minutes and 10 seconds to build. With TwineCompile, SDL 2 took 21 seconds. That's a 6.2x speedup.
Pretty good results! Imagine speeding up your own C++ project's build times by that much: six times faster. This scales roughly with the number of CPU cores.
Details:
These results were generated on an older Intel i7-3930K processor (6 cores with hyperthreading) with 16 GB of RAM, on a Samsung XP941 M.2 PCIe SSD.
The performance gain is mostly down to the number of available cores. We'd expect that, on a clean build, you'll see roughly a 4x gain with a 4-core CPU, or a 12x speedup with a 12-core CPU. However, TwineCompile also implements caching and has other techniques that help in other scenarios (that is, a make rather than a clean build, or more resource-constrained machines), so a make can be even faster. There is a 50x speedup we sometimes quote that can be achieved in some scenarios, which is very impressive.
You can find TwineCompile in GetIt today, in the 'IDE Plugins' section. It's free for Professional, as well as for Architect/Enterprise, if you have an active update subscription (this is new; it wasn't always available for Pro, but it is now).
I'd recommend downloading it! We're keen to hear what difference it makes to your project.
Advanced code editor markup, call stack UI, and multithreaded debugging tools, in a new plugin for RAD Studio now available for free.
We're delighted to announce a new IDE extension, available today for RAD Studio 10.4.1, created by the same author as the popular Bookmarks and Navigator plugins, Parnassus. If you're reading this, it's already available in GetIt for anyone with an update subscription: just open GetIt, go to the IDE Plugins section, and install 'Parallel Debugger'. It requires Delphi or C++Builder 10.4.1 or newer.
What is it?
A tool for debugging multithreaded applications
…and it also helps debug traditional single-threaded applications!
Something for everyone. Read on!
Some of the new editor markup, useful to you even if you don't use multiple threads in your application!
The problem
If you have more than one thread in your application, you probably want to debug the threads' interaction. The traditional IDE view of a debugged application, while it may list multiple threads in a Threads view, is to treat the app as though it had only a single thread: you will see only one call stack, and the run/pause/step controls are for the whole process. This leaves you, a developer who has to debug your application, with questions like:
How can I tell if multiple threads are running in the same code at the same time?
How can I see what all my threads are doing at once?
How do I step through a method in just one thread?
That is, how do I debug one thread without other threads running and executing code I don't want them to, hitting breakpoints, and so on?
How much CPU is each of my threads using? Are they efficient?
And perhaps other questions or wishes, even when debugging only a single thread, such as:
I wish it were easy to see all the methods in the call stack highlighted in my code
I wish I could easily make a breakpoint apply to only a specific thread
I wish the editor markup showing the current line of code were a little more obvious
I wish that pausing the process didn't drop me into the CPU view, but showed me only my own source
Each of these is answered by this new plugin.
Let's run through its features. In the next section you'll read about the Parallel Threads view and parallel call stacks; per-thread running and stepping; the Process view; new editor markup, including threads and call stacks in the editor; setting a breakpoint's thread affinity; moving the current execution point; the new Thread main menu; and more…
The Parallel Threads view
View > Debug Windows > Parallel Threads
This window lists all the threads in the process, laid out horizontally. When your application is running, if it is on Windows, each thread has a chart displaying its CPU usage.
When the app is paused, each thread displays its call stack.
Each thread is assigned a unique color, starting with a medium blue for the application's main thread. This color is used as a visual guide to identify the thread everywhere.
Thread names are displayed. Even if you don't name your main thread, Parallel Debugger is smart enough to detect it.
The current thread is bold and has a thin border of its color around it. Double-click a thread's header (its name) to make it the current thread.
Call stack entries without source code, that is, those not debuggable without using the CPU view, are collapsed by default. You can expand them (or turn this off to show a traditional call stack).
When pausing the process, the debugger will always show the topmost debuggable call stack entry. That is, it may not show the top entry in the call stack the way the IDE traditionally does, but it will show your source code that makes the call. The idea here is that you debug what you have control over: always show source.
The leftmost button on each thread's toolbar pauses the whole process, making that thread the current thread. It is 'pause the process at this thread'. The rightmost button lets you change the display order of the threads, pinning threads to the left or next to another pinned thread. This is useful when your app has many threads and you want to keep the ones you're interested in grouped together. If a pinned thread has a name, the pinning persists across process restarts: when you terminate and restart your app, the same threads will be pinned.
Running or stepping a single thread
The remaining toolbar buttons are for thread run control.
The IDE's normal run, step over, step into, etc. controls are process-level; that is, they run the whole process, waking all threads, and you just hope nothing else happens before the step operation completes in the thread you're looking at. In practice, a lot can: exceptions, breakpoints, and so on; plus, of course, sometimes you simply want to make sure other threads don't run while you debug a single thread.
These toolbar buttons let you run and step at a per-thread level. You can:
Run only this thread, keeping all other threads paused
Keep this thread paused, but run all other threads
Step into a method, in this thread only
Step over a line of code, in this thread only. This lets only this thread run and step; no other threads can run at the same time
Run until the method returns, for this thread only
To use these, make sure the thread you're interested in is the current thread by double-clicking its header or name. You'll see its title drawn in bold.
Each of these has a keyboard shortcut, meaning you can step (and so on) from the keyboard, not just by clicking a button with the mouse. The shortcuts are visible in the Thread menu, which has menu items for the current thread (see below).
Per-thread run control is one of Parallel Debugger's most powerful features.
Editor integration
An important part of understanding what's going on in a multithreaded application is knowing when multiple threads are executing in the same area of code. Parallel Debugger makes this clear by adding in-editor markup for each thread's full call stack. These are shown via 'tags', small colored markers on the right-hand side of the editor.
The top call stack entry, that is, where the thread is executing 'now', is marked with a solid tag in the thread's color. Other call stack entries for the same thread get a pale version of the thread's color (noting that the thread is still marked with a solid circle in the tag).
This lets you quickly read code and know that 'thread X is executing somewhere inside this line of code' and 'thread Y and thread Z are both in the same method right now'. You'll even see exactly where the threads are. In this screenshot, the current thread is the blue one, and a second thread (pale red) is executing with its current execution point inside IsPrime(), while the call to IsPrime has highlighted the line above the current thread's execution point.
Moving execution
Before installing the plugin, the IDE displayed the current thread's execution point with a small blue arrow. That is now replaced with a large chevron on the left-hand side of the code editor.
You can change where the execution point is, that is, where the thread will start executing the next time you hit Run or Step, simply by clicking and dragging this marker.
Breakpoint thread affinity
By default, breakpoints apply to all threads. Before installing the plugin, breakpoints were drawn with a red dot; but with Parallel Debugger, threads are assigned colors, and red means the red thread. Breakpoints that apply to all threads are now drawn as a multicolored wheel.
To make a breakpoint apply to only a specific thread, right-click the breakpoint. The new breakpoint menu lets you choose the thread the breakpoint will apply to.
Here, this breakpoint applies only to the green thread.
The Process view
View > Debug Windows > Process
This window shows information about the process as a whole. It displays process-level CPU usage (again split into kernel and user mode) and the process type (e.g. Wow64), and has run/pause/reset etc. buttons. These are process-level; that is, they provide the same functionality as the IDE's own run toolbar.
You can also view a list of threads by clicking the button at the bottom, as a quick way to choose the current thread (since scrolling horizontally in the threads view can take longer if you have many threads).
The Thread main menu
The IDE now has a Thread menu, located to the right of the Run menu. This provides a menu for most of the direct operations you can perform on a thread. It lets you set the thread of interest (that is, the current thread if the process is paused, or the thread you want to become the current thread the next time you pause), and for the current thread it has menu items for thread run control. You can see the shortcuts on these menu items.
It also lists every thread in the application, and for each one gives you run control plus pinning, effectively the same features as the Parallel Threads view.
Feature level
Finally, the topmost menu item controls Parallel Debugger's feature level: what it does while your application is running. The lowest level only tracks CPU usage: use it if you want Parallel Debugger installed but don't currently want to use it actively for this app. The next two levels control how deeply the debugger tracks thread call stacks.
Use this if you have dozens or hundreds of threads. In that situation, you're probably interested in only a subset of them. Set the feature level to 'Selected call stacks only', and Parallel Debugger will by default track call stacks for the main thread, pinned threads, and the current thread only. You can always get the call stack for any thread by clicking a button displayed over the call stack area in the threads view.
Supported platforms
Parallel Debugger has full functionality when locally debugging applications on Windows.
On other platforms, or when remote debugging, functionality depends on what the debugger supports. CPU usage is supported only for local (not remote-debugged) Windows applications. Per-thread stepping or running will only work on platforms that support thread freezing. There is a known issue for C++ Win64 where call stacks cannot be evaluated: this will be fixed in an upcoming version of C++Builder.
Following a Parnassus tradition started with Bookmarks, Parallel Debugger genuinely resolves a bug! (RSP-29768.)
In general: if you're using Windows, the debugger has full functionality.
The plugin supports RAD Studio 10.4.1 (and newer, once 10.4.2 is released).
Getting Parallel Debugger
Parallel Debugger is in GetIt now!
Thanks to Embarcadero, Parnassus is making the debugger free for any RAD Studio customer with an active update subscription. Open GetIt, go to the IDE Plugins category, and click Install.
Both Parnassus and Embarcadero hope you'll find this extension a great addition to your IDE.
On a personal note, I'd like to thank Embarcadero for taking an interest in the plugin and wanting to add it to GetIt, and all my beta testers, who since August have used versions of this plugin of steadily improving quality. Thank you all very much!
Marcação de editor de código avançado, interface de usuário de pilha de chamadas e ferramentas de depuração multithread – em um novo plugin para RAD Studio agora disponível gratuitamente.
Temos o prazer de anunciar uma nova extensão IDE disponível hoje para RAD Studio 10.4.1, criada pelo mesmo autor que os populares plug-ins Bookmarks e Navigator, Parnassus. Se você está lendo isso, ele já está disponível no GetIt para qualquer pessoa com assinatura de atualização: basta abrir o GetIt, ir para a seção Plug-ins do IDE e instalar o ‘Depurador Paralelo’. Requer Delphi ou C ++ Builder 10.4.1 ou mais recente.
O que é isso?
Uma ferramenta para depurar aplicativos multithread
… E também ajuda a depurar aplicativos tradicionais de thread único!
Algo para todos. Leia mais!
Algumas das novas marcações do editor, úteis para você mesmo se você não usar vários tópicos em seu aplicativo!
O problema
Se você tiver mais de um encadeamento em seu aplicativo, provavelmente deseja depurar a interação do encadeamento. Uma visão do IDE tradicional de um aplicativo depurado, embora possa listar vários threads em uma visão Threads, é tratar o aplicativo como se tivesse apenas um único thread: você verá apenas uma pilha de chamadas e os controles de execução / pausa / etapa são para todo o processo. Isso deixa você, um desenvolvedor que precisa depurar seu aplicativo, com perguntas como:
Como posso ver se vários threads estão em execução no mesmo código ao mesmo tempo?
Como posso ver o que todos os meus tópicos estão fazendo ao mesmo tempo?
Como passo por um método em apenas um thread?
Ou seja, como faço para depurar um thread sem outros threads em execução e execução de código que eu não quero, atingindo pontos de interrupção, etc?
Quanta CPU está usando cada um dos meus threads? Eles são eficientes?
E talvez outras dúvidas ou desejos, até mesmo depurando apenas em um único thread, como:
Eu gostaria que fosse fácil ver todos os métodos na pilha de chamadas destacados em meu código
Eu gostaria de poder facilmente fazer um ponto de interrupção se aplicar apenas a um determinado tópico
Eu gostaria que a marcação do editor mostrando a linha de código atual parecesse um pouco mais óbvia
Eu gostaria que, ao pausar o processo, ele não me levasse para a visualização da CPU, mas me mostrasse apenas minha própria fonte
Cada uma delas é respondida por este novo plugin.
Vamos examinar seus recursos. Na próxima seção, você lerá sobre a exibição Parallel Threads e as pilhas de chamadas paralelas; execução e revisão por thread; a visualização do processo; nova marcação do editor, incluindo threads e pilhas de chamadas no editor; definir a afinidade de thread de um ponto de interrupção; mover a execução atual; o novo menu principal do Thread; e mais…
A visão Parallel Threads
Exibir> Janelas de depuração> Threads paralelos
Esta janela lista todos os threads no processo, listados horizontalmente. Quando seu aplicativo está em execução, se estiver no Windows, cada thread tem um gráfico exibindo o uso da CPU.
Quando o aplicativo é pausado, cada thread exibe sua pilha de chamadas.
Cada thread recebe uma cor exclusiva , começando com um azul médio para o thread principal do aplicativo. Esta cor é usada como um guia visual para identificar o fio em todos os lugares.
Os nomes dos threads são exibidos. Mesmo se você não nomear seu thread principal, o Parallel Debugger é inteligente o suficiente para detectá-lo.
O segmento atual está em negrito e tem uma borda fina de sua cor ao redor. Clique duas vezes no título de um tópico (seu nome) para torná-lo o tópico atual.
As entradas da pilha de chamadas sem código-fonte – ou seja, não depuráveis sem usar a visualização da CPU – são reduzidas por padrão. Você pode expandi-los (ou desligá-los para mostrar uma pilha de chamadas tradicional).
Ao pausar o processo, o depurador sempre mostrará a entrada principal da pilha de chamadas depurável. Ou seja, pode não mostrar a entrada superior na pilha de chamadas da maneira que o IDE tradicionalmente faz, mas mostrará o código-fonte seu que faz a chamada. A ideia aqui é depurar o que você tem controle – sempre mostre a fonte.
O botão mais à esquerda em cada barra de ferramentas de encadeamento pausa todo o processo, tornando esse encadeamento o encadeamento atual. É ‘pausar o processo neste tópico’. O botão mais à direita permite alterar a ordem de exibição dos tópicos, fixando os tópicos à esquerda ou ao lado de outro tópico fixado. Isso é útil quando você tem muitos threads em seu aplicativo e deseja manter aqueles de seu interesse agrupados. Se um encadeamento fixado tiver um nome, a fixação é persistente nas reinicializações do processo: quando você encerra e reinicia seu aplicativo, os mesmos encadeamentos são fixados.
Executando ou escalando um único thread
Os botões restantes da barra de ferramentas são para controle de execução de thread.
Os controles normais de execução, passagem, entrada, etc. do IDE são de nível de processo; isto é, eles executarão todo o processo, ativando todos os threads, e você apenas espera que nada mais aconteça até que a operação passo a passo seja concluída no thread que você está examinando. Na prática, muitos podem – exceções, pontos de interrupção, etc. – mais, é claro, às vezes você só quer ter certeza de que outros encadeamentos não sejam executados durante a depuração de um único encadeamento.
Esses botões da barra de ferramentas permitem que você execute e pise em um nível por thread. Você pode:
Execute apenas este tópico, mantendo todos os outros tópicos em pausa
Mantenha este tópico em pausa, mas execute todos os outros tópicos
Entre em um método, apenas neste tópico
Passe por uma linha de código, apenas neste tópico. Isso permite que apenas este segmento seja executado e executado; nenhum outro tópico pode ser executado ao mesmo tempo
Executar até o retorno do método, apenas para este segmento
Para usá-los, certifique-se de que o tópico no qual você está interessado seja o tópico atual, clicando duas vezes em seu título ou nome. Você verá que desenha o título em negrito.
Cada um deles possui um atalho de teclado, o que significa que você pode acessar (etc) através do teclado, não apenas clicando em um botão com o mouse. Os atalhos são visíveis no menu Thread, que possui itens de menu para o thread atual (veja abaixo).
O controle de execução por thread é um dos recursos mais poderosos do Parallel Debugger.
Integração do Editor
Uma tarefa importante para entender o que está acontecendo em um aplicativo multithread é saber quando vários threads estão sendo executados na mesma área do código. O Parallel Debugger deixa isso claro ao adicionar marcação no editor para cada pilha de chamadas completa do thread. Eles são mostrados por meio de ‘tags’, pequenos marcadores coloridos no lado direito do editor.
A entrada da pilha de chamadas superior – ou seja, onde o encadeamento está sendo executado ‘agora’ – é marcada com uma tag de cor sólida, usando a cor do encadeamento. Outras entradas da pilha de chamadas para o mesmo tópico estão em uma versão desbotada da cor do tópico (observe que o tópico ainda está marcado com um círculo sólido na tag).
Isso permite que você leia rapidamente seu código e saiba, ‘Thread X está sendo executado em algum lugar dentro desta linha de código’ e ‘Thread Y e Thread Z estão ambos no mesmo método agora’. Você verá até onde os fios estão exatamente. Nesta captura de tela, o encadeamento atual é o azul e um segundo encadeamento (vermelho claro) está sendo executado com seu ponto de execução atual dentro de IsPrime (), mas a chamada para IsPrime destacou a linha acima do ponto de execução do encadeamento atual.
Execução Móvel
Antes de instalar o plugin, o IDE costumava exibir o ponto de execução do thread atual com uma pequena seta azul. Isso agora foi substituído por um grande chevron no lado esquerdo do editor de código.
Você pode alterar onde está o ponto de execução – onde o thread começará a ser executado na próxima vez que você clicar em Executar ou Etapa – simplesmente clicando e arrastando este marcador.
Afinidade de Thread de Ponto de Interrupção
Por padrão, os pontos de interrupção se aplicam a todos os threads. Antes de instalar o plug-in, os pontos de interrupção foram desenhados com um ponto vermelho, mas com o Parallel Debugger os threads recebem cores e vermelho significa o thread vermelho. Os pontos de interrupção que se aplicam a todos os threads agora são desenhados como uma roda multicolorida.
Para fazer um ponto de interrupção se aplicar a apenas um segmento específico, clique com o botão direito do mouse no ponto de interrupção. O novo menu Breakpoint permite que você escolha um thread ao qual o breakpoint será aplicado.
Aqui, esse ponto de interrupção se aplica apenas ao segmento verde.
A visão do processo
Exibir> Janelas de depuração> Processo
Esta janela mostra informações sobre o processo como um todo. Ele exibe o uso da CPU no nível do processo (novamente dividido em kernel e modo do usuário), o tipo de processo (por exemplo, Wow64) e tem botões executar / pausar / redefinir etc. Eles são de nível de processo, ou seja, fornecem a mesma funcionalidade que a barra de ferramentas de execução do próprio IDE.
Você também pode ver uma lista de tópicos clicando no botão na parte inferior, como uma maneira rápida de escolher o tópico atual (já que rolar horizontalmente na visualização Tópicos pode demorar mais se você tiver muitos tópicos.)
O menu principal do Tópico
O IDE agora tem um menu Thread, localizado à direita do menu Run. Isso fornece um menu para a maioria das operações diretas que você pode realizar para um thread. Ele permite que você defina o thread de interesse (ou seja, o thread atual se o processo estiver pausado, ou o thread que você deseja que se torne o thread atual na próxima pausa), e para o thread atual tem itens de menu para controle de execução do thread. Você pode ver os atalhos nesses itens de menu.
Ele também lista cada thread no aplicativo, e para cada um mostra que você executa o controle e a fixação, efetivamente os mesmos recursos da visualização Threads paralelos.
Nível de recurso
Por fim, o item de menu superior controla o nível de recurso do Parallel Debugger: o que ele faz quando seu aplicativo está em execução. O nível mais baixo é apenas para rastrear o uso da CPU: use-o se quiser que o depurador paralelo seja instalado, mas não deseja usá-lo ativamente para este aplicativo atualmente. Os próximos dois níveis controlam a profundidade com que o depurador rastreia as pilhas de chamadas de thread.
Use isso se você tiver dezenas ou centenas de tópicos. Nessa situação, você provavelmente está interessado apenas em um subconjunto de threads. Defina o nível de recurso como ‘Somente Pilhas de Chamadas Selecionadas’, e o depurador paralelo rastreará as pilhas de chamadas para o encadeamento principal, encadeamentos fixados e o encadeamento atual apenas por padrão. Você sempre pode obter a pilha de chamadas de qualquer thread clicando em um botão exibido na área da pilha de chamadas na visualização Threads.
Plataformas Suportadas
O Parallel Debugger tem funcionalidade total ao depurar localmente aplicativos no Windows.
Em outras plataformas, ou depuração remota, a funcionalidade depende do que o depurador suporta. O uso da CPU é compatível apenas com aplicativos locais do Windows (depuração não remota). A revisão ou execução por thread só funcionará em plataformas que suportam o congelamento de thread. Há um problema conhecido para C ++ Win64 em que as pilhas de chamadas não podem ser avaliadas: isso será corrigido em uma próxima versão do C ++ Builder.
Following a Parnassus tradition that started with Bookmarks, the Parallel Debugger actually resolves a bug! (RSP-29768.)
In general: if you are using Windows, the debugger has full functionality.
The plugin supports RAD Studio 10.4.1 (and newer, when 10.4.2 is released).
Getting the Parallel Debugger
The Parallel Debugger is in GetIt now!
Thanks to Embarcadero, Parnassus is making the debugger available free of charge to any RAD Studio customer with an active Update Subscription. Open GetIt, go to the IDE Plugins category, and click Install.
Both Parnassus and Embarcadero hope you will find this extension a great addition to your IDE.
On a personal note, I would like to thank Embarcadero for being interested in the plugin and wanting to add it to GetIt, and all my beta testers who have used various versions of this steadily improving plugin since August. Many thanks to everyone!
The Embarcadero Showcase features a number of amazing software solutions created by our customers. We are adding new showcases all the time. The purpose of the Showcase is to highlight the successes our customers are enjoying using our tools and solutions such as RAD Studio, Delphi, C++Builder, InterBase, and RAD Server. You can submit your own showcase, and we may feature it if all the necessary media is available.
Submitting a new showcase is easy. Simply visit the Showcase submission form and enter the required information and media. A brief application description outlining how and why the software solution uses Delphi is ideal. To be evaluated for a Showcase, the submission must include access to 1080p-resolution screenshots or a public download (such as a free trial) so that high-resolution screenshots can be created. In addition, these need to be taken on the latest version of Windows. For mobile apps, provide the App Store URLs in the description along with any extra screenshots you may have. You can also submit a YouTube URL featuring your software solution, and it may be included in the Showcase.
Here are some existing Showcases you can check out below.
WinSCP is a popular, award-winning native SFTP client, FTP client, and file manager for Microsoft Windows. It has been downloaded more than 143 million times and is available in many languages. WinSCP is built with several programming languages, but C++Builder is the main tool powering its graphical user interface. It is a great example of leveraging C++Builder's rapid UI development…
Broken Games is a small, ambitious independent game development company based in Berlin, Germany. Their flagship game, Rise of Legions, is a multiplayer RPG available for the Windows platform. Co-founders Tobias and Martin focus on bringing people together through play…
EarMaster is a comprehensive consumer-grade application with extraordinary functionality that draws on a variety of different technologies. It has almost 3,000 lessons created by music teachers for everyone from beginners to professional musicians playing any instrument. Despite the app's technologically advanced backend, the EarMaster team worked hard to make it as simple and intuitive to use as possible. EarMaster features…
Are you ready to submit your own Showcase?
In the short presentation below, I walk through our recently updated Roadmap. We know how important it is for our customers to have a clear view of where we are heading with the product, which is why I believe this update is something you will really like.
In addition, this presentation covers the three WEB solutions that are part of our current Free Web Pack offer. It shows a little about each framework (IntraWeb, uniGUI, TMS Web Core) to help you understand their capabilities and choose the one that best fits your requirements.
I really hope you enjoy it, and don't hesitate to contact us if you need additional information or any help moving forward with your projects!
And finally, the presentation in PDF format with all the links can be found here:
We are pleased to invite all our RAD Studio customers with an active subscription to the NDA beta program for Embarcadero's 10.4.2 release of Delphi, C++Builder, and RAD Studio, codenamed “Hunter”. RAD Studio 10.4.2 builds on the great features introduced in RAD Studio 10.4 and 10.4.1 and adds new features and enhancements throughout the product.
To learn more about the capabilities we have planned for the 10.4.2 release, please refer to the RAD Studio November 2020 Roadmap PM Commentary blog post (note that features mentioned in the blog post are not committed until they are completed and released to GA). Once you have joined the beta, you will receive additional documentation detailing the features of each beta build.
How to join:
To participate in the beta, please provide your name and the email address associated with your update subscription (the email you used to register the product) using this form by Tuesday, December 15, 2020.
Once you have provided your email address, you will receive a follow-up email in the second half of December with a link to electronically sign the Hunter Beta NDA. After signing the NDA, you will be given the information you need to participate in the 10.4.2 beta. Please note that 10.4.2 beta builds cannot be installed on the same machine as your current 10.4 or 10.4.1 Sydney installation (we also generally recommend against installing beta versions on a production machine).
Not current on your subscription but interested in joining the 10.4.2 beta? Contact your Embarcadero sales representative or reseller partner to renew your subscription and be invited to join the beta program.
An online exam system is a web application that can be used for educational purposes. The system allows administrators to create complete exams with questions and answer options. Users are given access to exams and complete them within a given time. The system also allows users to see their exam results.
Online exam systems are always in demand because most exams and tests are now conducted online. So if you're a developer looking for a solution to build your own online exam system, you're in the right place. In this tutorial you will learn how to develop an online exam system with PHP and MySQL.
Here we will develop an online exam system and cover the following.
The Administrator can do the following:
Add/Edit Exams with questions and options.
Manage users.
The users can do the following:
Enroll to Exams.
View Own Exams.
Complete Exams.
View Exam Results.
So let's start developing the online exam system with PHP and MySQL. The major files are:
user.php
exam.php
enroll.php
questions.php
view.php
process_exam.php
User.php: A class containing user-related methods.
Exam.php: A class containing methods related to exams.
Questions.php: A class containing methods related to questions.
Step1: Create MySQL Database Tables
First we will create the MySQL database tables for our online exam system. The major tables are the following.
We will create the online_exam_user table to store user information.
CREATE TABLE `online_exam_user` (
`id` int(11) UNSIGNED NOT NULL,
`first_name` varchar(255) DEFAULT NULL,
`last_name` varchar(255) DEFAULT NULL,
`gender` enum('Male','Female') NOT NULL,
`email` varchar(255) DEFAULT NULL,
`password` varchar(64) NOT NULL,
`mobile` varchar(12) NOT NULL,
`address` text NOT NULL,
`created` datetime NOT NULL DEFAULT current_timestamp(),
`role` enum('user','admin') NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
We will create the online_exam_exams table to store exam information.
CREATE TABLE `online_exam_exams` (
`id` int(11) NOT NULL,
`user_id` int(11) NOT NULL,
`exam_title` varchar(250) NOT NULL,
`exam_datetime` datetime NOT NULL,
`duration` varchar(30) NOT NULL,
`total_question` int(5) NOT NULL,
`marks_per_right_answer` varchar(30) NOT NULL,
`marks_per_wrong_answer` varchar(30) NOT NULL,
`created_on` datetime NOT NULL,
`status` enum('Pending','Created','Started','Completed') NOT NULL,
`exam_code` varchar(100) NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
We will create the online_exam_question table to store exam questions.
CREATE TABLE `online_exam_question` (
`id` int(11) NOT NULL,
`exam_id` int(11) NOT NULL,
`question` text NOT NULL,
`answer` enum('1','2','3','4') NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
We will create the online_exam_option table to store question options.
CREATE TABLE `online_exam_option` (
`id` int(11) NOT NULL,
`question_id` int(11) NOT NULL,
`option` int(2) NOT NULL,
`title` varchar(250) NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
Finally, we will create the online_exam_question_answer table to store users' answers to exam questions.
CREATE TABLE `online_exam_question_answer` (
`id` int(11) NOT NULL,
`user_id` int(11) NOT NULL,
`exam_id` int(11) NOT NULL,
`question_id` int(11) NOT NULL,
`user_answer_option` enum('0','1','2','3','4') NOT NULL,
`marks` varchar(20) NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
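The `marks` column above is filled in when an answer is graded. As a rough illustration of the grading rule this schema implies, here is a sketch (assumptions: the `'0'` sentinel in `user_answer_option` means unanswered, and the per-right/per-wrong marks come from the exam row; the function names are illustrative, not part of the tutorial's code):

```javascript
// Sketch of the grading rule implied by the schema: a matching option earns
// marksPerRight, a wrong option costs marksPerWrong (negative marking), and
// the '0' sentinel (unanswered) earns nothing.
function gradeAnswer(userOption, correctOption, marksPerRight, marksPerWrong) {
  if (userOption === '0') return 0;                      // unanswered
  if (userOption === correctOption) return marksPerRight; // correct option
  return -marksPerWrong;                                   // wrong option
}

// Sums the marks of all answers belonging to one exam attempt.
function gradeExam(answers, marksPerRight, marksPerWrong) {
  return answers.reduce(
    (total, a) => total + gradeAnswer(a.user, a.correct, marksPerRight, marksPerWrong),
    0
  );
}
```

In the tutorial this calculation would run server-side when the exam is processed (e.g. in process_exam.php), writing the per-question result into the `marks` column.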
Step2: Manage Exam Section
We will implement the functionality in the exam.php file to manage exams. We will create the HTML to add, edit, and delete exams.
We will implement the method delete() in the class Exam.php to delete an exam.
public function delete(){
    // Only delete an exam that belongs to the logged-in user.
    if($this->id && $_SESSION["userid"]) {
        $stmt = $this->conn->prepare("
            DELETE FROM ".$this->examTable."
            WHERE id = ? AND user_id = ?");
        $this->id = htmlspecialchars(strip_tags($this->id));
        $stmt->bind_param("ii", $this->id, $_SESSION["userid"]);
        if($stmt->execute()){
            return true;
        }
    }
}
We will also implement the method listExam() in the class Exam.php to list all exams.
public function listExam(){
    // Keep the search condition separate so it can be reused for the
    // filtered-record count below.
    $searchCondition = '';
    if(!empty($_POST["search"]["value"])){
        // Search only on columns that exist in the exams table, and escape
        // the DataTables search term to avoid SQL injection.
        $search = $this->conn->real_escape_string($_POST["search"]["value"]);
        $searchCondition = ' AND (exam_title LIKE "%'.$search.'%"
            OR exam_datetime LIKE "%'.$search.'%"
            OR status LIKE "%'.$search.'%") ';
    }
    $baseQuery = " FROM ".$this->examTable."
        WHERE user_id = '".$_SESSION["userid"]."' ".$searchCondition;
    $sqlQuery = "SELECT id, exam_title, exam_datetime, duration, total_question,
        marks_per_right_answer, marks_per_wrong_answer, status".$baseQuery;
    // DataTables sends a 0-based column index; SQL column ordinals are
    // 1-based. Only a numeric index and a whitelisted direction reach the query.
    if(!empty($_POST["order"])){
        $orderColumn = intval($_POST['order']['0']['column']) + 1;
        $orderDir = $_POST['order']['0']['dir'] === 'desc' ? 'DESC' : 'ASC';
        $sqlQuery .= ' ORDER BY '.$orderColumn.' '.$orderDir.' ';
    } else {
        $sqlQuery .= ' ORDER BY id ASC ';
    }
    if($_POST["length"] != -1){
        $sqlQuery .= 'LIMIT ' . intval($_POST['start']) . ', ' . intval($_POST['length']);
    }
    $stmt = $this->conn->prepare($sqlQuery);
    $stmt->execute();
    $result = $stmt->get_result();
    // Unfiltered total for this user.
    $stmtTotal = $this->conn->prepare("SELECT id FROM ".$this->examTable." WHERE user_id = '".$_SESSION["userid"]."'");
    $stmtTotal->execute();
    $allRecords = $stmtTotal->get_result()->num_rows;
    // Total after filtering (without LIMIT), used by DataTables for paging.
    $stmtFiltered = $this->conn->prepare("SELECT id".$baseQuery);
    $stmtFiltered->execute();
    $displayRecords = $stmtFiltered->get_result()->num_rows;
    $records = array();
    while ($exam = $result->fetch_assoc()) {
        $rows = array();
        $rows[] = $exam['id'];
        $rows[] = $exam['exam_title'];
        $rows[] = $exam['exam_datetime'];
        $rows[] = $exam['duration'];
        $rows[] = $exam['total_question'];
        $rows[] = $exam['marks_per_right_answer'];
        $rows[] = $exam['marks_per_wrong_answer'];
        $rows[] = $exam['status'];
        $rows[] = '<a type="button" name="questions" href="questions.php?exam_id='.$exam["id"].'" class="btn btn-info btn-xs add_question"><span class="glyphicon" title="Add Question">Questions</span></a>';
        $rows[] = '<a type="button" name="enroll" href="enroll.php?exam_id='.$exam["id"].'" class="btn btn-primary btn-xs enroll"><span class="glyphicon glyphicon-user" title="Enroll">Enroll</span></a>';
        $rows[] = '<button type="button" name="result" id="'.$exam["id"].'" class="btn btn-success btn-xs result"><span class="glyphicon" title="Result">Result</span></button>';
        $rows[] = '<button type="button" name="update" id="'.$exam["id"].'" class="btn btn-warning btn-xs update"><span class="glyphicon glyphicon-edit" title="Edit"></span></button>';
        $rows[] = '<button type="button" name="delete" id="'.$exam["id"].'" class="btn btn-danger btn-xs delete"><span class="glyphicon glyphicon-remove" title="Delete"></span></button>';
        $records[] = $rows;
    }
    // DataTables expects iTotalRecords = total rows for this user and
    // iTotalDisplayRecords = rows remaining after filtering.
    $output = array(
        "draw" => intval($_POST["draw"]),
        "iTotalRecords" => $allRecords,
        "iTotalDisplayRecords" => $displayRecords,
        "data" => $records
    );
    echo json_encode($output);
}
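For context, listExam() reads the request format that the DataTables server-side processing protocol posts to it. This sketch shows the shape of that payload as a plain object; the field names `draw`, `start`, `length`, `search.value`, and `order` are part of the DataTables protocol, while the function itself is only illustrative (in a real page, the DataTables library builds and sends this for you):

```javascript
// Builds the POST body DataTables sends for server-side processing;
// listExam() reads these exact keys from $_POST. page is 1-based here
// for readability.
function buildDataTablesRequest(draw, page, pageLength, searchTerm) {
  return {
    draw: draw,                         // echoed back as "draw" in the JSON reply
    start: (page - 1) * pageLength,     // offset used in the SQL LIMIT clause
    length: pageLength,                 // page size used in the SQL LIMIT clause
    search: { value: searchTerm, regex: false },
    order: [{ column: 0, dir: 'asc' }]  // 0-based column index plus direction
  };
}
```

For example, requesting page 2 with 10 rows per page yields `start: 10, length: 10`, which listExam() turns into `LIMIT 10, 10`.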
Step3: Manage Exam Questions
We will create the HTML in questions.php to add, edit, and delete questions. We will also handle the functionality to add question options.
We will implement the method enrollToExam() in the class Exam.php to enroll a user in an exam so that they can complete it.
public function enrollToExam(){
    if($this->exam_id) {
        // Record the enrollment itself.
        $stmt = $this->conn->prepare("
            INSERT INTO ".$this->enrollTable."(`user_id`, `exam_id`)
            VALUES(?,?)");
        $this->exam_id = htmlspecialchars(strip_tags($this->exam_id));
        $stmt->bind_param("ii", $_SESSION["userid"], $this->exam_id);
        if($stmt->execute()){
            // Pre-create one (still unanswered) answer row per exam question.
            $stmtAnswer = $this->conn->prepare("
                INSERT INTO ".$this->questionAnswerTable."(`user_id`, `exam_id`, `question_id`)
                VALUES(?,?,?)");
            $sqlQuery = "
                SELECT id
                FROM ".$this->questionTable."
                WHERE exam_id = ?";
            $stmtQuestion = $this->conn->prepare($sqlQuery);
            $stmtQuestion->bind_param("i", $this->exam_id);
            $stmtQuestion->execute();
            $result = $stmtQuestion->get_result();
            while ($question = $result->fetch_assoc()) {
                $stmtAnswer->bind_param("iii", $_SESSION["userid"], $this->exam_id, $question['id']);
                $stmtAnswer->execute();
            }
            return true;
        }
    }
}
Step5: Implement View Exam
We will create the HTML to display the exam questions with their options so that users can complete the exam. We will also display a countdown timer showing the remaining time, so that users finish the exam within the allotted duration.
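The countdown itself is plain client-side arithmetic. A minimal sketch, assuming the exam's `duration` value is stored in minutes (adapt if you store it differently); the function names are illustrative, not part of the tutorial's files:

```javascript
// Remaining exam time in seconds, clamped at zero once the exam is over.
function remainingSeconds(examStartMs, durationMinutes, nowMs) {
  const endMs = examStartMs + durationMinutes * 60 * 1000;
  return Math.max(0, Math.floor((endMs - nowMs) / 1000));
}

// Formats a second count as MM:SS for the on-page timer display.
function formatTimer(totalSeconds) {
  const m = Math.floor(totalSeconds / 60);
  const s = totalSeconds % 60;
  return String(m).padStart(2, '0') + ':' + String(s).padStart(2, '0');
}
```

In the actual page you would call remainingSeconds() from a setInterval tick, render the result with formatTimer(), and auto-submit the exam form when it reaches zero.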
If you are a developer wondering whether you should take a look at the Enterprise or Architect Edition (What extra do you get? Is it useful? What else can you do with Architect? How can you use it to speed up your development?), then this webinar is for you. Join the Embarcadero team and see first-hand how to develop faster with the extra features included in the RAD Studio Architect edition.
In this session, Stephen Ball, Mary Kelly, and Alex Ruiz will discuss and demonstrate the highly productive tools and extra features of the Architect edition, followed by open Q&A. The session will cover multiple phases of the development cycle, showing how Aqua Data Studio, Sencha Architect, Ranorex, RAD Server, and FireMonkey enable data discovery, application testing, and middle-tier, mobile, and web development.
3 p.m. GMT (London)
10 a.m. EST (New York)
9 a.m. CST (Austin)
Did you know?
Right now, for a limited time, you can buy the Architect edition for the price of Enterprise! Check out the RAD offers for more details.
Traditionally in our country, end-of-year purchases are made as gifts with an eye to the future. This year, Embarcadero has decided to give gifts of its own to buyers of new licenses of our products.
Until the end of this year, buyers of every new RAD Studio or Delphi Enterprise or Architect license will receive, free of charge, one of three WEB development packages from our partners: IntraWeb, uniGUI, or TMS Web Core, at the buyer's choice.
RAD Studio and Delphi are practically the only visual development tools for building cross-platform applications from a single code base for desktop, server, and mobile platforms, enabling truly rapid development with visual design of the user interaction. But don't forget that you can just as well use these tools to create full-featured, flexible WEB applications in the modern style of attractive pages implemented in HTML, CSS, JavaScript, and so on, so that users can work in any web browser. All you need is the appropriate package and component set from one of our technology or vendor partners.
The main advantage of such solutions is that all your familiar, well-established methods of working with data and handling events, and all of your algorithmic logic (unless, of course, you have buried it deep inside the graphical UI), can stay exactly the same, in the same units, while the new components let you design WEB pages and translate those designs into HTML, CSS, and JavaScript. In this way, Delphi developers can implement the full range of software needed for WEB solutions (full-stack development), or just one side of such distributed systems: the servers, including the middle tier, or, conversely, purely client applications.
Yes, Embarcadero also offers other first-class WEB development tools, such as Sencha Ext JS, but those tools require considerable effort to learn and master, and they differ greatly from RAD in both methodology and technology. At the same time, the WEB packages listed here already include most of the templates and styles used in WEB development today. The flexibility of this approach produces applications indistinguishable from those built with popular standard WEB frameworks.
The list of partner WEB development packages includes well-known and popular products:
Each of them has a long list of advantages, and each is better than the others at something. Our colleagues have compared the capabilities and application areas of these and several other packages. You can find a recording of the webinar reviewing and comparing the WEB development packages at https://youtu.be/ks4Q_W5kwn4
So, only until the end of December, with every new license of RAD Studio / Delphi Enterprise or Architect edition, you can receive one of the listed WEB development packages for free. When ordering, you only need to specify which package you have chosen.
Next year, let's give users even more beautiful, convenient, and modern applications!
Сегодняшнее распространение инструментов для улучшения разработки программного обеспечения — повод для празднования. Многие замечательные люди ежедневно работают над созданием и распространением утилит, плагинов и IDE, которые упрощают нашу разработку! Однако оборотная сторона медали распространения описывает постоянную борьбу за определение лучшего инструмента как для текущей работы, так и для вашего будущего. Если вы когда-либо сталкивались с восемью фреймворками, конкурирующими за ваше внимание в одной и той же области программного обеспечения, и чувствовали паралич выбора, вы понимаете, насколько сложно может быть сделать этот важный личный и деловой выбор. Имея это в виду, мы приступаем к миссии по публикации серии официальных документов, в которых RAD Studio сравнивается и противопоставляется другим основным фреймворкам для разработки приложений, чтобы найти лучший долгосрочный выбор.. Наша аудитория — это как разработчики, которые должны хорошо понимать свой выбор, так и лица, принимающие решения, и бизнес-стратеги, ищущие структуру, которая будет поддерживать своевременную доставку, расширение в будущем и долгосрочную стабильность.
Методология
Для сравнения выбранных фреймворков будут использоваться пять тестовых приложений.
Простой калькулятор стилей Windows 10
GitHub недавний проводник
Проводник Windows
Читатель новостей Unicode RSS (с локальной базой данных)
Приложение для захвата экрана и истории
Каждое приложение включает в себя основные функции, выполняемые хорошей платформой, такие как дизайн пользовательского интерфейса, связь REST API, поддержка Unicode, поддержка баз данных и т. Д. Эти приложения будут разработаны экспертами в выбранных средах (добровольно для Delphi и заключены контракты на другие платформы) и оценены в соответствии с к основным метрикам проекта.
Метрики
Эти документы будут оценивать фреймворки с точки зрения производительности разработчиков , функциональности фреймворка , гибкости инструментов и производительности во время выполнения .
Продуктивность разработчиков — это мера усилий и кода, необходимых разработчикам для выполнения типичных задач разработки. Время, необходимое для выполнения задач разработки, влияет на доставку решения, а объем создаваемого кода влияет на усилия по обслуживанию (больше кода = больше ошибок). Производительность напрямую влияет на время вывода продукта на рынок и долгосрочные затраты на рабочую силу. Производительность будет измеряться путем сравнения начальной скорости разработки, окончательного времени сборки «быстрого запуска» и размера кода каждого тестового приложения, написанного в рассматриваемых средах.
Функциональность означает его пригодность для конкретной задачи, определяемой в этом проекте как его расширяемость и безопасность. Отличная функциональность фреймворка позволяет компаниям создавать собственные расширения на родном языке, а также защищать свой исходный код от использования. Функциональность фреймворка будет оцениваться в соответствии с его расширяемостью, сопротивляемостью декомпиляции и известными эксплойтами.
Гибкость означает широкий спектр задач, которые можно решить с помощью этого инструмента. Хотя IDE и фреймворки технически бесконечно гибки, поскольку в них можно разрабатывать все, что угодно, этот проект будет сосредоточен на кроссплатформенном использовании, сложности развертывания и требованиях, инструментах, интеграции с «магазинами приложений» и инструментах доступа к базе данных. Гибкость фреймворка позволяет разработчикам достигать своих целей с минимальным использованием других языков / инструментов и обеспечивает благодатную почву для надежного рынка сторонних инструментов . Гибкость будет качественно оценена на основе собственных возможностей каждой платформы, вариантов развертывания и предложений сторонних производителей.
Производительность во время выполнения позволяет конечным пользователям оценивать одно приложение по сравнению с другим с такими же функциями и интерфейсом. Компании, создающие приложения с превосходной безвременной производительностью, избегают неудовлетворенности клиентов за счет минимизации времени ожидания и использования ресурсов на медленных машинах. Производительность во время выполнения будет оцениваться по времени запуска, пиковому использованию памяти и среднему использованию памяти.
Project Roadmap
Embarcadero plans this project as an iterative comparison of RAD Studio, Delphi, and C++Builder against other frameworks, with the goal of stimulating conversation with other framework developers. All research and data will be published on GitHub for others to review. The first comparison is between RAD Studio's Visual Component Library (VCL) and .NET's Windows Presentation Foundation (WPF), using the Calculator benchmark application. After that, RAD Studio's multi-platform FireMonkey framework will be tested against Electron. Expect future iterations to continue working through the benchmark projects above, incorporate new frameworks, and build on these initial papers to provide a comprehensive comparison of 2020's leading development tools.
Today's proliferation of tools to enhance software development is a cause for celebration. Many amazing people work daily to build and distribute utilities, plugins, and IDEs that make our development easier! However, the flip side of the proliferation coin is a constant struggle to identify the best tool for both the current job and your future. If you have ever faced eight frameworks competing for your attention in the same software domain and felt choice paralysis, you understand how hard that meaningful personal and business choice can be. With this in mind, we are embarking on a mission to publish a series of white papers that compare and contrast RAD Studio with other major application development frameworks, to discover the best long-term choice. Our audience is both developers, who must intimately understand their tool of choice, and decision makers and business strategists looking for a framework that will support on-time delivery, future expansion, and long-term stability.
Methodology
Five benchmark applications will be used to compare the selected frameworks:
Simple Windows 10-style calculator
GitHub recents explorer
Windows file explorer
Unicode RSS news reader (with local database)
Screen capture and history application
Each application incorporates basic functions fulfilled by a good framework, such as UI design, REST API communication, Unicode support, database support, etc. These applications will be developed by experts in the selected frameworks (volunteers for Delphi, contractors for the other frameworks) and judged according to the project's key metrics.
Metrics
These papers will judge the frameworks in the areas of developer productivity, framework functionality, tool flexibility, and runtime performance.
Recently, I've been invited to Google DevFest to deliver a presentation on our experiences working with Kubernetes.
Below I talk about an online learning and streaming platform where the decision to use Kubernetes has been contested both internally and externally since the beginning of its development.
The application and its underlying infrastructure were designed to meet the needs of the regulations of several countries:
The app should be able to run on-premises, so students’ data could never leave a given country. The app also had to be available as a SaaS product.
It can be deployed as a single-tenant system where a business customer only hosts one instance serving a handful of users, but some schools could have hundreds of users.
Or it can be deployed as a multi-tenant system where the client is e.g. a government and needs to serve thousands of schools and millions of users.
The application itself was developed by multiple, geographically scattered teams, so a microservices architecture was justified. But both the distributed system and the underlying infrastructure seemed like overkill when we considered that, at the product's initial market entry, most of its customers would need small instances.
Was Kubernetes suited for the job, or was it overkill? Did our client really need Kubernetes?
Let’s figure it out.
(Feel free to check out the video presentation, or the extended article version below!)
Let's talk a bit about Kubernetes itself!
Kubernetes is an open-source container orchestration engine that has a vast ecosystem. If you run into any kind of problem, there's probably a library somewhere on the internet that already solves it.
But Kubernetes also has a daunting learning curve, and initially, it's pretty complex to manage. Cloud ops / infrastructure engineering is a complex and big topic in and of itself.
Kubernetes does not really hide that complexity from you; it plunges you into deep water, since it merely gives you a unified control plane to handle all the moving parts that you need to care about in the cloud.
So, if you're just starting out right now, then it's better to start with small things and not with the whole package straight away! First, deploy a VM in the cloud. Use some PaaS or FaaS solutions to play around with one of your apps. It will help you gradually build up the knowledge you need on the journey.
So you want to decide if Kubernetes is for you.
First and foremost, Kubernetes is for you if you work with containers! (It kinda speaks for itself for a container orchestration system). But you should also have more than one service or instance.
Kubernetes makes sense when you have a large microservice architecture, or when you run dedicated instances per tenant and have a lot of tenants.
Also, your services should be stateless, with state stored in databases outside of the cluster. Another selling point of Kubernetes is the fine-grained control it gives you over the network.
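That network control is exposed through NetworkPolicy objects. Here's a minimal sketch (the names and labels are hypothetical, not from our platform) that only lets frontend pods reach the API pods:

```yaml
# Hypothetical policy: only pods labeled app=frontend may reach
# pods labeled app=api on port 8080; other ingress is denied.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-allow-frontend   # hypothetical name
spec:
  podSelector:
    matchLabels:
      app: api
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - port: 8080
```

In a multi-tenant setup like the one described below, policies like this are also what keep one tenant's pods from talking to another's.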
And, maybe the most common argument for using Kubernetes is that it provides easy scalability.
Okay, and now let's take a look at the flip side of it.
Kubernetes is not for you if you don't need scalability!
If your services rely heavily on disks, then you should think twice if you want to move to Kubernetes or not. Basically, one disk can only be attached to a single node, so all the services need to reside on that one node. Therefore you lose node auto-scaling, which is one of the biggest selling points of Kubernetes.
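To make the constraint concrete: a ReadWriteOnce volume claim, like the hypothetical one sketched below, can be mounted read-write by only one node at a time, so every pod that mounts it has to be scheduled onto that same node.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data            # hypothetical name
spec:
  accessModes:
    - ReadWriteOnce         # read-write on a single node only
  resources:
    requests:
      storage: 10Gi
```

Cloud block disks (EBS, GCE PD, Azure Disk and the like) typically only support this mode, which is why moving files out to object storage frees up the scheduler.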
For similar reasons, you probably shouldn't use k8s if you don't host your infrastructure in the public cloud. When you run your app on-premises, you need to buy the hardware beforehand and you cannot just conjure machines out of thin air. So basically, you also lose node auto-scaling, unless you're willing to go hybrid cloud and bleed over some of your excess load by spinning up some machines in the public cloud.
If you have a monolithic application that serves all your customers and you need some scaling here and there, then cloud service providers can handle it for you with autoscaling groups.
There is really no need to bring in Kubernetes for that.
Let's see our Kubernetes case-study!
Maybe it's a little bit more tangible if we talk about an actual use case, where we had to go through the decision making process.
Online Learning Platform is an application that you could imagine as if you took your classroom and moved it to the internet.
You can have conference calls. You can share files as handouts, you can have a whiteboard, and you can track the progress of your students.
This project started during the first wave of the lockdowns around March, so one thing that we needed to keep in mind is that time to market was essential.
In other words: we had to do everything very, very quickly!
This product targets mostly schools around Europe, but it is now used by corporations as well.
So, we're talking about millions of users from the moment we go to market.
The product needed to run on-premises, because one of the main targets was governments.
Initially, we were provided with a proposed infrastructure where each school would have its own VM, and all the services and all the databases would reside in those VMs.
Handling that many virtual machines, properly handling rollouts to those, and monitoring all of them sounded like a nightmare to begin with. Especially if we consider the fact that we only had a couple of weeks to go live.
After studying the requirements and the proposal, it was time to call the client to..
Discuss the proposed infrastructure.
So the conversation was something like this:
"Hi guys, we would prefer to go with Kubernetes because to handle stuff at that scale, we would need a unified control plane that Kubernetes gives us."
"Yeah, sure, go for it."
And we were happy, but we still had a couple of questions:
"Could we, by any chance, host it on the public cloud?"
"Well, no, unfortunately. We are negotiating with European local governments and they tend to be squeamish about sending their data to the US. "
Okay, anyways, we can figure something out...
"But do the services need filesystem access?"
"Yes, they do."
Okay, crap! But we still needed to talk to the developers so all was not lost.
Let's call the developers!
It turned out that what we were dealing with was a fairly typical microservice-based architecture, consisting of a lot of services talking over HTTP and messaging queues.
Each service had its own database, and most of them stored some files in Minio.
In case you don't know it, Minio is an object storage system that implements the S3 API.
Now that we knew the fine-grained architectural layout, we gathered a few more questions:
"Okay guys, can we move all the files to Minio?"
"Yeah, sure, easy peasy."
So, we were happy again, but there was still another problem, so we had to call the hosting providers:
"Hi guys, do you provide hosted Kubernetes?"
"Oh well, at this scale, we can manage to do that!"
So, we were happy again, but..
Just to make sure, we wanted to run the numbers!
Our target was to be able to run 60 000 schools on the platform in the beginning, so we had to see if our plans lined up with our limitations!
We shouldn't have more than 150 000 total pods!
10 pods per tenant times 6 000 tenants is 60 000 pods. We're good!
We shouldn't have more than 300 000 total containers!
It's one container per pod, so we're still good.
We shouldn't have more than 100 pods per node and no more than 5 000 nodes.
Well, what we have is 60 000 pods, and since each tenant's 10 pods sit on a node of their own, that's already 6 000 nodes. And that's just the initial rollout, so we're already over the 5 000 node limit.
Okay, well... Crap!
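The back-of-the-envelope math above can be scripted. This is only a sketch of the check: the 10 pods per tenant figure and the one-tenant-per-node packing are assumptions taken from the numbers in this article, and the limits are the published Kubernetes scalability thresholds of the time.

```python
import math

# Kubernetes scalability thresholds (as documented around v1.19):
MAX_PODS = 150_000        # total pods per cluster
MAX_CONTAINERS = 300_000  # total containers per cluster
MAX_NODES = 5_000         # nodes per cluster

def capacity_check(tenants, pods_per_tenant=10, tenants_per_node=1):
    """Return (pods, containers, nodes, violated_limits) for one cluster."""
    pods = tenants * pods_per_tenant
    containers = pods  # one container per pod in this setup
    nodes = math.ceil(tenants / tenants_per_node)
    violated = [name for name, value, limit in [
        ("pods", pods, MAX_PODS),
        ("containers", containers, MAX_CONTAINERS),
        ("nodes", nodes, MAX_NODES),
    ] if value > limit]
    return pods, containers, nodes, violated

# 6 000 tenants at 10 pods each, one tenant per node:
print(capacity_check(6_000))
```

Pods and containers come in well under their limits, but the node count is the one that blows the budget, which is exactly what forced the federation discussion below.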
But, is there a solution to this?
Sure, it's federation!
We could federate our Kubernetes clusters..
..and overcome these limitations.
We have worked with federated systems before, so Kubernetes surely provides something for that, riiight? Well yeah, it does... kind of.
There's the Federation v1 API, which is sadly deprecated.
Then we saw that Kubernetes Federation v2 is on the way!
It was still in alpha at the time when we were dealing with this issue, but the GitHub page said it was rapidly moving towards beta release. By taking a look at the releases page we realized that it had been overdue by half a year by then.
Since we only had a short period of time to pull this off, we really didn't want to live that much on the edge.
So what could we do? We could federate by hand! But what does that mean?
In other words: what could have been gained by using KubeFed?
Having a lot of services would have meant that we needed a federated Prometheus and Logging (be it Graylog or ELK) anyway. So the two remaining aspects of the system were rollout / tenant generation, and manual intervention.
Manual intervention is tricky. To make it easy, you need a unified control plane where you can eyeball and modify anything. We could have built a custom one that gathers all information from the clusters and proxies all requests to each of them. However, that would have meant a lot of work, which we just did not have the time for. And even if we had the time to do it, we would have needed to conduct a cost/benefit analysis on it.
The main factor in deciding whether you need a unified control plane for everything is scale, or in other words, the number of different control planes to handle.
The original approach would have meant 6000 different planes. That’s just way too much to handle for a small team. But if we could bring it down to 20 or so, that could be bearable. In that case, all we need is an easy mind map that leads from services to their underlying clusters. The actual route would be something like:
Service -> Tenant (K8s Namespace) -> Cluster.
The Service -> Namespace mapping is provided by Kubernetes, so we needed to figure out the Namespace -> Cluster mapping.
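A minimal sketch of how that Namespace -> Cluster mapping could look in code. The naming scheme and cluster names here are hypothetical illustrations, not our actual tooling:

```python
# Tenant namespaces are assumed (hypothetically) to encode the region,
# e.g. "mazowieckie-school-0042"; the region prefix selects the cluster.
REGION_TO_CLUSTER = {
    "mazowieckie": "cluster-mazowieckie",  # hypothetical cluster names
    "warszawa": "cluster-warszawa",
    # ...one entry per voivodeship / county, plus the capital
}

def cluster_for(namespace: str) -> str:
    """Map a tenant namespace to the cluster that hosts it."""
    region = namespace.split("-", 1)[0]
    return REGION_TO_CLUSTER[region]

print(cluster_for("mazowieckie-school-0042"))  # cluster-mazowieckie
```

The point is that the mapping is a pure function of the namespace name, so an on-call engineer can compute it in their head without consulting any registry.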
This mapping is also necessary to reduce the cognitive overhead and the time spent digging around when an outage happens, so it needs to be easy to remember while providing a more or less uniform distribution of tenants across clusters. The most straightforward way seemed to be to base it on geography. I'm most familiar with Poland's and Hungary's geography, so let's take them as an example.
Poland comprises 16 voivodeships, while Hungary comprises 19 counties as main administrative divisions. Each country’s capital stands out in population, so they have enough schools to get a cluster on their own. Thus it only makes sense to create clusters for each division plus the capital. That gives us 17 or 20 clusters.
So if we go back to our original 60 000 pods, with each tenant's pods on a node of their own, we can see that 2 clusters are enough to host them all, but that leaves us no room for either scaling or later expansion. If we spread them across 17 clusters, in the case of Poland for example, that means around 3,500 pods and 350 nodes per cluster, which is still manageable.
This could be done in a similar fashion for any European country, but still needs some architecting when setting up the actual infrastructure. And when KubeFed becomes available (and somewhat battle tested) we can easily join these clusters into one single federated cluster.
Great, we have solved the problem of control planes for manual intervention. The only thing left was handling rollouts..
As I mentioned before, several developer teams had been working on the services themselves, and each of them already had their own GitLab repos and CI pipelines. They already built their own Docker images, so we simply needed a place to gather them all and roll them out to Kubernetes. So we created a GitOps repo where we stored the Helm charts, and set up a GitLab CI pipeline to build the actual releases and then deploy them.
From here on, it takes a simple loop over the clusters to update the services when necessary.
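Sketched as a GitLab CI fragment, that loop might look like the following. The cluster names, chart path, and job layout are hypothetical; the point is just running the same Helm release against each cluster in turn:

```yaml
# Hypothetical .gitlab-ci.yml fragment: one job per target cluster,
# each applying the same chart against that cluster's kube context.
deploy:
  stage: deploy
  parallel:
    matrix:
      - CLUSTER: [cluster-mazowieckie, cluster-warszawa]   # hypothetical
  script:
    - kubectl config use-context "$CLUSTER"
    - helm upgrade --install platform ./charts/platform --values "values/$CLUSTER.yaml"
```

With per-cluster values files, the same pipeline run upgrades every cluster consistently, which is the "simple loop" mentioned above.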
The other thing we needed to solve was tenant generation.
That was easy as well: we just needed to create a CLI tool that is given the school's name and its county or state.
That designates the school's target cluster; the tool then pushes the new tenant to our GitOps repo, which triggers basically the same rollout as a new version does.
We were almost good to go, but there was still one problem: on-premises.
Although our hosting providers effectively acted as a public cloud (or at least something we can think of as one), we were also targeting companies that want to educate their employees.
Huge corporations, like banks, are just as squeamish about sending their data out to the public internet as governments are, if not more so.
So we needed to figure out a way to host this on servers within vaults completely separated from the public internet.
In this case, we had two main modes of operation.
One is when a company just wanted a boxed product and they didn't really care about scaling it.
And the other one was where they expected it to be scaled, but they were prepared to handle this.
In the second case, it was kind of a bring-your-own-database scenario: the system could be set up so that we would connect to the customer's existing database.
And in the other case, what we could do is to package everything — including databases — in one VM, in one Kubernetes cluster. But! I just wrote above that you probably shouldn't use disks and shouldn't have databases within your cluster, right?
However, in that case, we already had a working infrastructure.
Kubernetes provided us with infrastructure as code already, so it only made sense to use that as a packaging tool as well, and use Kubespray to just spray it to our target servers.
It wasn't a problem to have disks and databases within our cluster, because the targets were companies that didn't want to scale it anyway.
So it's not about scaling. It is mostly about packaging!
Earlier I told you that you probably don't want to do this on-premises, and that is still right! If on-premises is your main target, then you probably shouldn't go with Kubernetes.
However, since our main target was somewhat of a public cloud, it wouldn't have made sense to recreate the whole thing (basically create a new product, in a sense) for these kinds of servers.
So, as this on-premises version is kind of a spin-off, Kubernetes made sense here as well, as a packaging solution.
Basically, I've just given you a bullet-point list to help you determine whether Kubernetes is for you or not, and then I tore it apart and threw it in the bin.
And the reason for this, as I also mentioned, is:
Cloud ops is difficult!
There aren't really one-size-fits-all solutions, so basing your decision on checklists you see on the internet is definitely not a good idea.
We've seen it many times: companies adopt Kubernetes because it seems to fit, but when they actually start working with it, it turns out to be overkill.
If you want to save yourself a year or two of headaches, it's a lot better to ask an expert first: spend a couple of hours or days going through your use cases and discussing them.
In case you're thinking about adopting Kubernetes, or getting the most out of it, don't hesitate to reach out to us at info@risingstack.com, or by using the contact form below!
Following tradition, once this year's (virtual) 2020 Conference wrapped up, last week we aired the talks best rated by the audience, along with the keynote by Marco, David, and Jim.
Besides a sample of the Conference content, which was of the highest level by the way, we also had an extra track covering:
Roadmap Update (November 2020)
Web Development Strategies
Free Web Pack
IntraWeb (Atozed)
uniGUI (FMSoft)
WEB Core (TMS)
Este vídeo em particular pode ser acessado diretamente aqui:
Conforme mencionado durante a apresentação, o objetivo aqui foi apresentar estes três frameworks (IntraWeb, uniGUI e Web Core) para aqueles que ainda não se aventuraram com o desenvolvimento web.
Durante a campanha vigente, um destes frameworks, em sua versão mais completa, pode ser seu na aquisição de uma licença Enterprise ou Architect do Delphi, C++ Builder ou RAD Studio!
TwineCompile is an advanced compilation system that uses multi-threading technology and caching techniques to make your C++Builder compiles up to 50x faster! This IDE plugin is included free with an active Update Subscription for all C++Builder and RAD Studio customers via the GetIt Package Manager. Install TwineCompile from GetIt today, before the webinar, so you can easily follow along as Jonathan demonstrates the power of TwineCompile.
Deep Dive Webinar: Boost Your C++Builder Compile Speeds with TwineCompile
December 14, 2020 at 11am CST / 1700 UTC [ Register ]
with Jonathan Benedicto of JomiTech, creator of TwineCompile
From an elevated RAD Studio command prompt, you can install it with the command:
getitcmd -i=TwineCompile-5.2.1
Other TwineCompile features:
The automatic background compilation engine ensures files are compiled as fast as they are saved!
A highly tuned precompiled header handling system automatically maximizes the concurrent use of precompiled headers across multiple threads!
Seamless integration with the C++Builder 10.4 Sydney IDE
Theme support for all IDE themes, providing a unified workspace!
Full support for the 32-bit and 64-bit compilers!
Register now for the deep-dive webinar to maximize your C++Builder productivity and take your compile speeds to new heights.
TwineCompile performing automatic background compilation (SORTA) in C++Builder 10.4 Sydney.
TwineCompile building a project in C++Builder 10.4 Sydney using the Dark Theme.
We have just released a new patch focused on improving RAD Studio 10.4.1 support for Xcode 12, iOS 14, and macOS 11 Big Sur (Intel): these are operating systems and tools that were not available when 10.4.1 shipped.
Specifically, the patch offers fixes for a Delphi exception issue on macOS 11 Big Sur Intel (which also affected PAServer when running on that platform, meaning this patch includes a new version of PAServer), for SDK import from Xcode 12, and for debugging applications on an iOS 14 device.
Note that new ARM-based Macs running macOS 11 Big Sur can run macOS applications built for the Intel platform, including those created with Delphi 10.4.1, via the Apple Rosetta 2 compatibility layer.
The patch can be installed via GetIt (with automatic installation through a deferred package, applied when you restart RAD Studio) or via a direct download from my.embarcadero.com (available shortly) followed by manual installation. In either case, you will have to copy the PAServer for macOS installer to your Mac and install it manually. The readme file includes additional information and details.
David I. has a fantastic blog post about using Python4Delphi with C++Builder. It was inspired by our earlier webinars on the topic, and it is the result of his collaboration with Kiriakos (AKA PyScripter), the maintainer of Python4Delphi, who also made some changes in the library so it works better with C++Builder.
By popular request, David and Kiriakos have also agreed to run a Python for C++ Developers webinar, where you can learn how to leverage Python from your favorite C++ developer tools.
It's already November: time flies these days. Despite the global pandemic, we keep charging ahead. As developers get more accustomed to working from home (and some love it), we're seeing more projects pick up, which is exciting. I'm especially excited that there are more and more public Delphi projects on GitHub, and that the related discussions on popular platforms such as Stack Overflow and Reddit are growing, if not quite as fast. I know we have our own, more proprietary ones, which are great, but the more Delphi we put out there, the better!
My themes lately have been simple:
Develop brilliant code.
Share it.
Inspire existing and new Delphi developers.
Have you ticked any of these boxes lately? There are about 8K Delphi projects on GitHub. More than 500,000 developers know Delphi, and at least 200-300K are actively developing with it. You do the math!
We recently released Bold for Delphi as open source to help give back to the community, and a fantastic group of developers is now working on the project. Embarcadero is just getting started with open-source projects, so keep an eye out for more!
Here are some highlights of our current efforts:
10.4.1 Quality Release Makes 10.4 Even Better
10.4 was a substantial release with over 1,000 improvements and quality fixes. Many of its features were welcomed by large enterprises and individual developers alike. 10.4.1 is a stable and robust release, featuring a faster implementation of Delphi Code Insight based on the Language Server Protocol, VCL styles that work great with High-DPI and 4K monitors, and extended Apple platform and API coverage. It also includes a much-improved GetIt Package Manager and many other features. 10.4.1 adds over 800 quality improvements, including more than 500 for issues publicly reported on the Quality Portal site. The 10.4.2 beta will also start soon for Update Subscription customers. It's a great time to upgrade!
DelphiCon Was Incredible!
With almost 4,000 people registered, this was our biggest annual Delphi event. If you missed the live sessions, register now to catch the replays. This year we included many expert panels, featuring some of Delphi's leading architects: a big hit with everyone! Join in on the thought-leader presentations and enjoy some of the great perks and discounts. One of our goals with DelphiCon was to simplify the format compared to previous CodeRage events, and we hope you enjoyed it. We're always looking for ways to improve, and your feedback is valuable. By popular demand, a dedicated C++ event is in the works for the spring.
Updated RAD Studio Roadmap
Product management recently updated the RAD Studio roadmap for November 2020. It's always great to see the plans for the future and to read product management's commentary on them. These roadmaps are based on the direction of the industry and the feedback we receive from you, our users. Review the roadmap, leave your feedback, and file feature requests in the Quality Portal.
We know budgets are tight these days, and we want to make working with the latest versions more economical. We have a number of attractive global promotions to fit different needs.
We've enhanced the Architect SKU with many value-added products, from Ext JS and Ranorex licenses to expanded use of InterBase and RAD Server. If you're looking for the most value for your dollar, that's clearly a great deal.
If you want to build web apps with Delphi, we offer three great options: IntraWeb, TMS Web, or UniGui. These are proven solutions that work great with RAD Studio. They all have nuances that make them good for different use cases, but each is powerful and reliable.
Demand for our upgrade pricing is at an all-time high. To meet this demand and help with tighter budget constraints, we're offering 35% off all editions. The discount and the Web Pack are mutually exclusive; they are not available together.
We work very closely with our regional partners and have tailored local promotions, so I encourage you to talk to resellers or our Embarcadero account representatives to find out what's best for you.
FREE Upgrade Office Hours!
We're bringing back free upgrade consultations. For a limited time, you can talk to our software consultants to develop your upgrade plan. How long will it take? What resources and tools do you need? What's the impact on the architecture? What if you want to add a web client or a mobile client? Are there third parties available to help you? These are all valid questions you can discuss at no cost. Click the link below to schedule your appointment today.
Cool Free Debugger IDE Extension for Update Subscription Customers
I'm excited to introduce another cool component coming to GetIt for all Update Subscription customers: Parallel Debugger.
Many apps today are adding multithreading: the days of single-threaded apps, where everything is done on the main UI thread, are over. You may be using threads via TThread or via the newer Parallel Programming Library, both of which are very popular with our developers. Yet even though parallelism and threading are increasingly important, the debugger interface in many IDEs (not just ours, but all IDEs) is still largely oriented toward single-threaded programming: for example, showing only one thread's call stack at a time.
This new Parnassus Parallel Debugger IDE plugin is aimed squarely at understanding your app holistically when it's doing several things at once. You can examine all parallel execution, viewing all threads at once and their interactions, with improved editor markup, improvements to running and stepping a process, breakpoint improvements, and more. We believe it adds features not seen in any other IDE. Even if you don't use multiple threads, some of the enhanced UI can significantly improve your debugging productivity and your understanding of your app's execution!
Parallel Debugger comes from the same source as Bookmarks and Navigator, two plugins on GetIt that add improved navigation and other features inside the IDE. They are consistently among the most popular downloads on GetIt. We hope the new plugin will be just as well received!
You can look forward to more details on Parallel Debugger soon, such as a detailed description of the features you'll have when debugging in Delphi and C++Builder, screenshots, and more. We can't wait to show you!
Much Faster C++ Compiles!
We hear from our customers that the speed of C++ compilation, especially with Clang, is something you'd really like us to speed up. Well, we have something special for you: TwineCompile, a freely available plugin on GetIt (free for all SKUs, including Pro!) that can speed up compilation of C++ codebases by up to 50x.
That's not a typo: in addition to scaling speed with the number of available cores, TwineCompile has some impressive techniques for making further optimizations. It can drastically reduce the time it takes to build a C++ app. We love it and highly recommend it. It's available for free on GetIt today, including for Pro (not just Enterprise and Architect).
Kyle Wheeler and David Millington are working on a C++Builder-specific update in which they'll share some news about new libraries and other C++-specific updates. Stay tuned!
Delphi and Python Working Together
This is particularly exciting for me because it's another community-driven project. The developer of PyScripter has built a series of great integrations between Delphi and Python for RAD Studio. These were presented in two webinars over the past few months with over 4,000 participants. It's a great opportunity to expand the tools available to Delphi developers and to introduce Python developers to RAD Studio. The Python developer community is growing very fast, and for many it's their first computer language (it probably should have been Delphi). RAD Studio is a natural and easy tool for these developers to build powerful native apps.
Ya es noviembre, el tiempo vuela estos días. A pesar de la pandemia mundial, seguimos avanzando. A medida que los desarrolladores se están acostumbrando a trabajar desde casa (y a algunos les encanta), vemos más proyectos retomando, lo cual es emocionante. Estoy especialmente emocionado de que haya cada vez más proyectos públicos de Delphi en GitHub, y las discusiones relacionadas en plataformas populares, como Stack Overflow y Reddit, estén creciendo, aunque no tan rápido. Sé que tenemos nuestros propios más propietarios, que son geniales, pero cuanto más Delphi publiquemos, ¡mejor!
Mis temas últimamente han sido simples:
Desarrolle un código brillante.
Compártelo .
Inspire a los desarrolladores de Delphi nuevos y existentes.
¿Ha marcado alguna de estas casillas últimamente? Hay alrededor de 8K proyectos Delphi en GitHub . Más de 500.000 desarrolladores conocen Delphi, y al menos 200-300K están desarrollando activamente con él. ¡Haz las matemáticas!
Recientemente lanzamos Bold for Delphi open source para ayudar a contribuir a la comunidad, y un grupo fantástico de desarrolladores ahora está trabajando en el proyecto. Embarcadero recién está comenzando con proyectos de código abierto, ¡así que esté atento a más!
Estos son algunos aspectos destacados de nuestros esfuerzos actuales:
10.4.1 El lanzamiento de calidad hace que 10.4 sea aún mejor
10.4 fue una versión esencial con más de 1,000 mejoras y correcciones de calidad. Muchas de sus características fueron bien recibidas tanto por grandes empresas como por desarrolladores individuales. 10.4.1 es una versión estable y robusta , que presenta una implementación más rápida de Delphi Code Insight basada en el protocolo de servidor de idiomas, estilos VCL que funcionan muy bien con monitores High-DPI y 4K, plataformas Apple extendidas y cobertura API. También incluye un administrador de paquetes GetIt muy mejorado y muchas otras características. 10.4.1 agrega más de 800 mejoras de calidad, incluidas más de 500 para problemas informados públicamente en el sitio Quality Portal. 10.4.2 Beta también comenzará pronto para los clientes de suscripción de actualización. ¡Es un buen momento para actualizar!
¡DelphiCon fue increíble!
Con casi 4000 personas registradas, este fue nuestro mayor evento anual de Delphi. Si se perdió las sesiones en vivo, regístrese ahora para ver las repeticiones . Este año incluimos muchos paneles de expertos, incluidos algunos de los arquitectos principalesde Delphi , ¡un gran éxito para todos! Únase y disfrute de las presentaciones de líderes de opinión y vea algunos de los grandes beneficios y descuentos disponibles . Uno de nuestros objetivos con DelphiCon era simplificar el formato en comparación con eventos anteriores de CodeRage y esperamos que lo haya disfrutado. Siempre estamos buscando formas de mejorar y sus comentarios son valiosos. Por demanda popular, se está preparando un evento dedicado de C ++ para la primavera.
Hoja de ruta de RAD Studio actualizada
La gestión de productos actualizó recientemente la hoja de ruta de RAD Studio para noviembre de 2020 . Siempre es bueno ver cuáles son los planes para el futuro y leer los comentarios de la gestión de productos sobre estos planes. Estas hojas de ruta se basan en la dirección de la industria y los comentarios que recibimos de ustedes, nuestros usuarios. Consulte la hoja de ruta, deje sus comentarios y presente solicitudes de funciones en Quality Portal .
Sabemos que los presupuestos son ajustados en estos días y queremos que trabajar con las últimas versiones sea más económico. Tenemos una serie de atractivas promociones globales para satisfacer diferentes necesidades.
Hemos mejorado el SKU de Architect para incluir una gran cantidad de productos de valor agregado, desde licencias Ext JS y Ranorex hasta el uso ampliado de InterBase y RAD Server. Si está buscando la mejor oferta en dólares, esa es claramente una excelente.
Si desea crear aplicaciones web con Delphi, ofrecemos tres excelentes opciones: IntraWeb , TMS Web o UniGui . Estas son soluciones comprobadas que funcionan muy bien con RAD Studio. Todos tienen matices que los hacen buenos para diferentes casos de uso, pero cada uno es poderoso y confiable.
La demanda de nuestros precios de actualización está en su punto más alto. Para satisfacer esta demanda y ayudar con las restricciones presupuestarias más estrictas, ofrecemos un 35% de descuento en todas las ediciones. El descuento y el Web Pack son exclusivos no están disponibles juntos.
Colaboramos muy de cerca con nuestros socios regionales y hemos adaptado promociones locales , por lo que le animo a hablar con los revendedores o con nuestros representantes de cuentas de Embarcaderopara ver qué es lo mejor para usted.
¡Actualice GRATIS el horario de oficina!
Estamos recuperando consultas de actualización gratuitas. Por tiempo limitado, puede hablar con nuestros consultores de software para desarrollar su plan de actualización. ¿Cuánto tiempo tardará? ¿Qué recursos y herramientas necesitas utilizar? ¿Cuál es el impacto en la arquitectura? ¿Qué sucede si desea agregar un cliente web o un cliente móvil? ¿Hay terceros disponibles para ayudarlo? Todas estas son preguntas válidas que puede discutir sin costo alguno. Haga clic en el enlace a continuación para programar su cita hoy.
Cool Free Debugger IDE Expansión para clientes con suscripción de actualización
Estoy emocionado de presentar otro componente interesante que llegará muy pronto a Getit para todos los clientes de Suscripciones de actualización: Parallel Debugger .
Hoy en día, muchas aplicaciones agregan subprocesos múltiples: los días de las aplicaciones de un solo subproceso donde todo se hace en el subproceso principal de la interfaz de usuario han terminado. Es posible que esté utilizando subprocesos a través de TThread , o mediante la nueva biblioteca de programación paralela , los cuales son muy populares entre nuestros desarrolladores. Sin embargo, a pesar de que el paralelismo y el subproceso son cada vez más importantes, la interfaz del depurador en muchos IDE, no solo el nuestro, sino todos los IDE, todavía está orientada en gran medida a la programación de un solo subproceso: por ejemplo, ver solo la pila de llamadas de un subproceso a la vez.
Este nuevo plugin Parnassus Parallel Debugger IDE está dirigido directamente a comprender su aplicación de manera integral cuando está haciendo varias cosas a la vez. Puede examinar toda la ejecución paralela, ver todos los hilos a la vez y sus interacciones, con el marcado mejorado del editor, mejoras en la ejecución y paso de un proceso, mejoras en los puntos de interrupción y más. Creemos que agrega características que no se ven en ningún otro IDE. Incluso si no utiliza varios subprocesos, algunas de las interfaces de usuario mejoradas pueden mejorar su productividad de depuración o la comprensión de la ejecución de su aplicación.
Parallel Debugger proviene de la misma fuente que Bookmarks y Navigator, dos complementos en GetIt que agregan navegación mejorada y otras características dentro del IDE. Se encuentran constantemente entre las descargas más populares de GetIt. ¡Esperamos que el nuevo complemento sea valorado de manera similar!
Puede esperar más detalles sobre el depurador paralelo pronto, como una descripción detallada de las funciones que tendrá al depurar en Delphi y C ++ Builder, capturas de pantalla y más. ¡No podemos esperar para mostrarte!
¡Compiladores de C ++ mucho más rápidos!
Escuchamos a nuestros clientes que la velocidad de la compilación de C ++, especialmente con Clang, es algo que realmente le gustaría que aceleramos. Bueno, tenemos algo especial para usted: TwineCompile, un complemento disponible gratuitamente en GetIt (¡que es gratis para todos los SKU, incluido Pro!) Que puede acelerar la compilación de bases de código C ++ hasta 50 veces.
Eso no es un error tipográfico, además de escalar la aceleración con la cantidad de núcleos disponibles, TwineCompile tiene algunas técnicas impresionantes para realizar otras optimizaciones. Realmente puede reducir drásticamente el tiempo que lleva crear una aplicación C ++. Nos encanta y realmente lo recomendamos. Está disponible en GetIt hoy, también para profesionales (no solo para empresas y arquitectos), de forma gratuita.
Kyle Wheeler y David Millington están trabajando en una actualización específica de C ++ Builder donde compartirán algunas noticias sobre nuevas bibliotecas y otras actualizaciones específicas de C ++. ¡Estén atentos para eso!
Delphi y Python trabajando juntos
Esto es particularmente emocionante para mí porque es otro proyecto impulsado por la comunidad. El creador de PyScripter creó una serie de integraciones increíbles entre Delphi y Python de RAD Studio. Estos se presentaron en dos seminarios web en los últimos meses con más de 4.000 participantes. Es una gran oportunidad para expandir las herramientas disponibles para los desarrolladores de Delphi y presentar a los desarrolladores de Python RAD Studio. La comunidad de desarrolladores de Python está creciendo muy rápido y, para muchos, este es su primer lenguaje informático (probablemente debería haber sido Delphi). RAD Studio es una herramienta fácil y natural para que estos desarrolladores creen poderosas aplicaciones nativas.
It's already November – time flies these days. Despite the global pandemic, we keep moving forward. As developers grow more accustomed to working from home (and some love it), we're seeing more projects gain momentum, which is exciting. I'm especially pleased that there are more and more public Delphi projects on GitHub, and that the related discussions on popular platforms such as Stack Overflow and Reddit are growing, though not as quickly. I know we have our own, more proprietary channels, which are great, but the more Delphi we put out there, the better!
My themes lately have been simple:
Develop brilliant code.
Share it.
Inspire new and existing Delphi developers.
Have you checked any of those boxes recently? There are roughly 8,000 Delphi projects on GitHub. More than 500,000 developers know Delphi, and at least 200,000–300,000 are actively developing with it. You do the math!
We recently open-sourced Bold for Delphi to help give back to the community, and a fantastic group of developers is now working on the project. Embarcadero is just getting started with open-source projects, so stay tuned for more!
Here are some highlights of our current efforts:
The 10.4.1 quality release makes 10.4 even better
10.4 was a pivotal release with more than 1,000 quality improvements and fixes, and many of its features have been embraced by large companies and individual developers alike. 10.4.1 is a stable, robust release, featuring a faster implementation of Delphi Code Insight based on the Language Server Protocol, VCL styles that work well with High-DPI and 4K monitors, and extended Apple platform and API coverage. It also includes a much-improved GetIt package manager and many other features. 10.4.1 adds more than 800 quality improvements, including over 500 for issues reported publicly on the Quality Portal. The 10.4.2 beta will also launch soon for Update Subscription customers. It's a great time to upgrade!
DelphiCon was amazing!
With nearly 4,000 people registered, this was our biggest annual Delphi event. If you missed the live sessions, register now to watch the replays. This year we included many expert panels, featuring some of Delphi's leading architects – a big hit with everyone! Join in, enjoy the presentations from thought leaders, and check out some of the great perks and discounts available. One of our goals with DelphiCon was to simplify the format compared with previous CodeRage events, and we hope you liked it. We're always looking for ways to improve, and your feedback is valuable. By popular demand, a dedicated C++ event is in the works for the spring.
Updated RAD Studio roadmap
Product management recently updated the RAD Studio roadmap for November 2020. It's always good to see what the plans are for the future and to read product management's commentary on those plans. These roadmaps are based on industry direction and on the feedback we receive from you, our users. Check out the roadmap, leave your feedback, and file feature requests on the Quality Portal.
We realize budgets are tight these days, and we want to make working with the latest releases more cost-effective. We have a number of attractive global promotions to meet different needs.
We've enhanced the Architect SKU to include many value-added products, from Ext JS and Ranorex licenses to expanded use of InterBase and RAD Server. If you're looking for the most value for your money, this is clearly a great deal.
If you want to build web applications with Delphi, we offer three great options: IntraWeb, TMS Web, or UniGui. These are proven solutions that work very well with RAD Studio. They each have nuances that suit them to different use cases, but all are powerful and reliable.
Demand for our upgrade pricing is at an all-time high. To meet that demand and help with tighter budget constraints, we're offering 35% off all editions. The discount and the Web Pack are mutually exclusive and not available together.
We work very closely with our regional partners and tailor local promotions, so I encourage you to talk to resellers or our Embarcadero account representatives to see what's best for you.
FREE upgrade office hours!
We're bringing back free upgrade consultations. For a limited time, you can talk to our software consultants to develop your upgrade plan. How long will it take? Which resources and tools do you need to use? What's the impact on your architecture? What if you want to add a web client or a mobile client? Are there third parties available to help you? These are all valid questions you can discuss free of charge. Click the link below to schedule your consultation today.
Cool free debugger IDE expansion for Update Subscription customers
I'm excited to introduce another interesting add-on coming very soon to GetIt for all Update Subscription customers: the Parallel Debugger.
Many applications today add multithreading: the days of single-threaded applications, where everything is done in the main UI thread, are over. You may be using threads through TThread or through the newer Parallel Programming Library, both very popular with our developers. Yet even as parallelism and threading become ever more important, the debugger interface in many IDEs – not just ours, but all IDEs – is still largely oriented toward single-threaded programming: for example, showing only one thread's call stack at a time.
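The post contains no code, but the limitation is easy to demonstrate. As a rough sketch (in Python rather than Delphi, purely for illustration), the "see every thread at once" view that a parallel debugger gives you amounts to taking a snapshot of every live thread's call stack, something most debugger UIs only show one thread at a time:

```python
import sys
import threading
import time
import traceback

def worker(n):
    # Simulate per-thread work so each thread has a distinct live stack.
    time.sleep(0.2 * n + 0.2)

threads = [threading.Thread(target=worker, args=(i,), name=f"worker-{i}")
           for i in range(3)]
for t in threads:
    t.start()

# sys._current_frames() returns {thread_id: topmost frame} for ALL threads
# at once -- the all-threads view a single-thread-oriented debugger lacks.
frames = sys._current_frames()
by_id = {t.ident: t.name for t in threading.enumerate()}
for tid, frame in frames.items():
    name = by_id.get(tid, "?")
    stack = traceback.extract_stack(frame)
    print(name, "->", stack[-1].name)  # innermost Python frame per thread

for t in threads:
    t.join()
```

A debugger plugin builds exactly this kind of cross-thread snapshot, then layers stepping, breakpoints, and editor markup on top of it.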
This new Parnassus Parallel Debugger IDE plugin is aimed squarely at giving you a comprehensive understanding of your application when it's doing several things at once. You can examine all parallel execution, seeing every thread at once along with their interactions, with enhanced editor markup, improvements to running and stepping through a process, breakpoint improvements, and much more. We believe it adds capabilities not seen in any other IDE. Even if you don't use multiple threads, some of the improved UI can genuinely boost your debugging productivity and your understanding of how your application executes!
The Parallel Debugger comes from the same source as Bookmarks and Navigator, two GetIt plugins that add enhanced navigation and other features inside the IDE. They are consistently among the most popular downloads on GetIt. We hope the new plugin will be just as well received!
You can expect more details about the Parallel Debugger soon, such as a detailed rundown of the features you'll have when debugging in Delphi and C++Builder, screenshots, and more. We can't wait to show you!
Much faster C++ compilers!
We hear from our customers that C++ compile speed, especially with Clang, is something you'd really like us to accelerate. Well, we have something special for you: TwineCompile, a plugin freely available on GetIt (free for all SKUs, including Pro!) that can speed up compilation of C++ codebases by up to 50x.
That's not a typo – in addition to scaling the speedup with the number of available cores, TwineCompile has some impressive techniques for performing other optimizations. It really can drastically cut the time it takes to build a C++ application. We love it and strongly recommend it. It's available on GetIt today, free of charge, for Professional as well (not just Enterprise and Architect).
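To give a feel for why the speedup scales with core count (and why the caching and other techniques matter – core scaling alone caps out well below 50x for any realistic build), here is a hypothetical back-of-the-envelope model using Amdahl's law, where only the parallelizable fraction of a build (per-file compilation) benefits from extra cores. The 95%/5% split is an illustrative assumption, not a measurement of TwineCompile:

```python
def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Ideal speedup when `parallel_fraction` of the work parallelizes
    perfectly across `cores` and the rest stays serial (Amdahl's law)."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# Hypothetical build: 95% of the time is per-file compilation that can
# run in parallel; 5% (e.g. linking) is inherently serial.
for cores in (2, 4, 8, 16):
    print(cores, "cores ->", round(amdahl_speedup(0.95, cores), 2), "x")
```

With a 5% serial portion the speedup can never exceed 20x no matter how many cores are available, which is exactly why a tool claiming more must also be attacking the work itself, not just parallelizing it.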
Kyle Wheeler and David Millington are working on a C++Builder-specific update in which they'll share some news about new libraries and other C++-specific updates. Stay tuned for that!
Delphi and Python working together
This is particularly exciting for me because it's another community-driven project. The creator of PyScripter has built a series of amazing integrations between Delphi and Python from RAD Studio. These were presented in two webinars over the past couple of months, with more than 4,000 attendees. It's a great opportunity to expand the tools available to Delphi developers and to introduce Python developers to RAD Studio. The Python developer community is growing very fast, and for many, it's their first computer language (it probably should have been Delphi). RAD Studio is a natural, easy tool for these developers to build powerful native applications.
I recently ran a webinar, alongside Mary Kelly, discussing how InterBase ISVs are innovating faster while improving profitability and return on investment by using InterBase. Following on from that webinar, I wanted to summarize some of the points discussed.
Why do ISVs choose InterBase?
To put it simply – trust. ISVs trust InterBase to lower their business risk and cut the large costs that surround data management and support, while starting their development with a stack of the core capabilities they need for data storage, data protection, and user security.
Real value of ownership
Let me share what I mean with a quick story from my own first-hand experience. Some years ago I was working as a director for an ISV when it bought a .NET software house. The .NET product used MSSQL as its backend and supported a quarter of the business that the existing Delphi & InterBase application supported. Even though it was hosted, the support overhead of the .NET product was more than five times the running cost compared with InterBase after the purchase, year after year.
Most of the cost came from employing two full-time DBAs, compared with the minimal training required for those supporting InterBase. InterBase was also well suited to remote deployment and support on whatever kind of machine the customer had. This helped improve the value proposition for the customer and reduce the risks of landing new business.
InterBase features and benefits that deliver business value
A full list of the top reasons ISV/OEM partners choose InterBase, discussed in the webinar, includes:
Highly scalable
From a single machine to hundreds of connections on a dedicated server, it grows with you easily!
Small memory and on-disk footprint
Whatever the device, InterBase just runs! From a phone to a laptop to a top-end server, InterBase is lean!
Affordable
The OEM/VAR program offers great options for ISVs, with pricing tailored to your usage.
Simple deployment
Next > Next > Done! You don't need a training course to get the InterBase server up and running, which makes it ideal for low-cost remote deployment.
Fast and simple
Out of the box, InterBase includes an element of self-tuning, but it is also highly configurable.
Reliable
Near-zero administration and automatic crash recovery make it ideal for embedding in remote deployments. Even if the plug is pulled on the server, InterBase comes back up and keeps going!
Embedded user security
It's easy to overlook the value you get (especially when combined with encryption) from controlling access to the database. Full role-based user security makes it simple to implement.
Change Views
Data change tracking with the IoT award-winning database.
The patented change-tracking method is designed to let thousands of connected devices track changes, even while disconnected, without any overhead on the central database! No change logs, table triggers, or anything else that bloats the database and becomes hard to manage over time!
Rich disaster recovery
A great mix of database features for backup, restore, dump, and more, enhanced still further by the unique use of table spaces.
Cross-platform support
Server, desktop AND mobile, spanning Windows, Linux, macOS, iOS, and Android.
Strong "enterprise-grade" encryption
256-bit AES on-disk encryption at rest that is transparent to the client, with column-level encryption options using multiple encryption keys. InterBase is used by everything from financial and point-of-sale systems to medical software worldwide to handle financial, personal, and medical data, enabling customers to meet a wide range of industry data-management and security standards.
Tight RAD integration
InterBase is easy to develop with, and it is widely used by RAD Studio customers.
Its open API also supports backup directly from your software, which makes routine administration tasks very simple.
It is also widely used by numerous .NET applications, and offers .NET, ODBC, and JDBC drivers.
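The change-tracking idea described under Change Views above can be illustrated with a toy sketch. This is a generic, hypothetical revision-counter scheme in Python, not InterBase's actual patented mechanism: each row carries the revision at which it last changed, and each subscriber remembers only the last revision it has seen, so there is no per-change log, and no triggers, to bloat the central database.

```python
class Store:
    """Central data store: rows are stamped with the revision of their last change."""
    def __init__(self):
        self.revision = 0
        self.rows = {}  # key -> (value, revision_last_changed)

    def put(self, key, value):
        self.revision += 1
        self.rows[key] = (value, self.revision)

    def changes_since(self, seen_revision):
        # No change log needed: just filter rows newer than the subscriber's mark.
        return {k: v for k, (v, rev) in self.rows.items() if rev > seen_revision}

class Subscriber:
    """A device that may go offline; it only keeps one integer of state."""
    def __init__(self, store):
        self.store = store
        self.seen = 0

    def sync(self):
        delta = self.store.changes_since(self.seen)
        self.seen = self.store.revision
        return delta

store = Store()
device = Subscriber(store)
store.put("a", 1)
store.put("b", 2)
print(device.sync())   # both rows are new to this device
store.put("a", 3)      # changed while the device is "offline"
print(device.sync())   # only the changed row comes back
```

The key property, which the real feature shares, is that the per-subscriber cost on the central store is a single high-water mark rather than a growing log of every change.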
The ISV business model
Beyond the above, the webinar also spends time looking at the core business of a typical ISV and the further value InterBase makes possible for them.
A large proportion of the InterBase customer stories shared on the Embarcadero website involve, time and again, customers talking about how InterBase has evolved – and, even when they have been slower to update their development environment, how InterBase has enabled them to ship new features. This open approach to innovation lets ISVs get a real innovation boost for their product from major InterBase releases, complementing their own R&D.
A good example of this was the addition of transparent on-disk encryption some years ago. This server-side feature, enabled with a simple parameter update on the client side, delivered rich functionality and data-storage compliance at almost zero R&D spend! More recently, Table Spaces has given customers a way to speed up their server for larger deployments by targeting which part of the database goes on which drive.
Anyway, I hope you find the webinar interesting, and it would be great to hear your InterBase stories!
Recentemente, conduzi um webinar, ao lado de Mary Kelly, discutindo como os ISVs do InterBase estão inovando mais rápido enquanto aumentam a lucratividade e o retorno sobre o investimento usando o InterBase. Na sequência desse webinar, eu gostaria de resumir alguns dos pontos discutidos.
Por que os ISVs escolhem o InterBase?
Simplificando – Confie – o ISV confia no InterBase para reduzir seus riscos de negócios e cortar grandes custos que existem em torno de gerenciamento e suporte de dados, enquanto inicia seu desenvolvimento com uma pilha de recursos essenciais necessários em torno de armazenamento de dados, proteção de dados e segurança do usuário .
Valor real de propriedade
Deixe-me compartilhar o que quero dizer com uma história rápida de minha própria experiência em primeira mão. Há alguns anos, trabalhei como Diretor de um ISV quando ele comprou uma casa de software .NET. O produto .NET usava MSSQL como backend e suportava um quarto dos negócios que o aplicativo Delphi & InterBase existente suportava. Embora fosse hospedado, a sobrecarga de suporte ao produto .NET era mais de 5 vezes o custo de execução em comparação com o InterBase após a compra, ano após ano.
A maior parte do custo veio de ter 2 DBA’s em tempo integral, em comparação com o treinamento mínimo necessário para aqueles que suportam o InterBase. Além disso, o InterBase era adequado para implantação remota e suporte em qualquer tipo de computador que o cliente tivesse. Isso ajudou a melhorar a proposta de valor para o cliente e a reduzir os riscos para o desembarque de negócios.
InterBase features and benefits that add business value
A full list of the main reasons ISV/OEM partners choose InterBase, as discussed in the webinar, includes:
Highly scalable
From a single machine to hundreds of connections on a dedicated server, it grows easily with you!
Small memory and on-disk footprint
Whatever the device, InterBase just runs! From a mobile phone to a laptop to a high-end server, InterBase stays lean.
Affordable
The OEM/VAR program offers great options for ISVs, with pricing tailored to your usage.
Simple deployment
Next > Next > Done! You don't need a training course to get an InterBase server up and running, which makes it ideal for low-cost remote deployment.
Fast and simple
Out of the box, InterBase includes an element of self-tuning, but it is also highly configurable.
Reliable
Near-zero administration and automatic crash recovery make it ideal for embedding in remote deployments. Even if the plug is pulled on the server, InterBase comes back up and keeps going!
Built-in user security
It's easy to overlook the value you get (especially when combined with encryption) from controlling access to the database. Full role-based user security makes implementation simple.
Change Views
Data change tracking with the IoT Award-winning database.
The patented change-tracking method is designed to let thousands of connected devices track changes, even while disconnected, without any overhead on the central database! No change logs, table triggers, or anything else that bloats the database and becomes hard to manage over time.
Rich disaster recovery
A great combination of database features for backup, restore, dump, and so on, enhanced further by the unique use of tablespaces.
Cross-platform support
Server, desktop, and mobile, spanning Windows, Linux, macOS, iOS, and Android.
Strong "enterprise-grade" encryption
256-bit AES on-disk encryption at rest that is transparent to the client, with column-level encryption options using multiple encryption keys. InterBase is used by everything from financial and POS systems to medical software globally to handle financial, personal, and medical data, enabling customers to meet a wide range of industry data-management and security standards.
Tight RAD integration
InterBase is easy to develop with and is widely used by RAD Studio customers.
Its open API also supports backing up directly from your software, making routine administrative tasks very simple.
It is also widely used by sizeable .NET applications and offers .NET, ODBC, and JDBC drivers.
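To make the Change Views idea above concrete, here is a minimal conceptual sketch of subscription-style change tracking. This is not InterBase's actual engine (its implementation is patented and server-internal); the class and names are hypothetical. The key idea it illustrates is that the server only stamps rows with a version counter and each subscriber remembers a high-water mark, so no change-log tables or triggers are needed.

```python
class ChangeTrackedTable:
    """Toy model of per-subscriber change tracking (hypothetical, not InterBase's engine)."""

    def __init__(self):
        self._version = 0   # monotonically increasing change counter
        self._rows = {}     # key -> (value, version when last modified)
        self._marks = {}    # subscriber -> last version it has synced

    def put(self, key, value):
        # Every write bumps the counter and stamps the row with it.
        self._version += 1
        self._rows[key] = (value, self._version)

    def subscribe(self, who):
        self._marks.setdefault(who, 0)

    def changes_for(self, who):
        """Return only the rows modified since this subscriber last synced."""
        mark = self._marks[who]
        delta = {k: v for k, (v, ver) in self._rows.items() if ver > mark}
        self._marks[who] = self._version  # advance the high-water mark
        return delta

table = ChangeTrackedTable()
table.subscribe("device-a")
table.put("order:1", "new")
table.put("order:2", "new")
print(table.changes_for("device-a"))  # first sync: both rows
table.put("order:1", "shipped")
print(table.changes_for("device-a"))  # second sync: only order:1
```

Note how a device that stays offline for a long time simply keeps an old high-water mark; catching up is one query, with no per-change log accumulating on the server.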
The ISV business model
Beyond the points above, the webinar also spends time examining the key business of a typical ISV and the other value InterBase makes possible for them.
Many of the InterBase customer stories shared on the Embarcadero website include, again and again, customers talking about InterBase's evolution and how, even when they were slow to update their development environment, InterBase enabled them to ship new features. This open-innovation approach lets ISVs get a genuine innovation boost for their products from major InterBase releases, complementing their own R&D.
A good example of this was the addition of transparent on-disk encryption a few years ago. This server-side feature, enabled with a simple parameter update on the client side, delivered rich functionality and data-storage compliance with almost no R&D spend! More recently, tablespaces have let customers speed up their servers for larger deployments by directing which part of the database goes on which drive.
In any case, I hope you find the webinar interesting, and it would be great to hear your InterBase stories!
Have you ever visited a website on your mobile device only to find it formatted for desktop and barely readable on a 5-inch screen? Users running high-DPI displays hit similar problems. As 4K screens proliferate and consumer demand pushes toward 8K, it is important to adjust user interfaces so that forms and controls don't become unreadably small on high-resolution monitors. RAD Studio 10.3 Rio and Rio Update 2 introduced improved controls for high-DPI applications to address this problem, and Ray Konopka of Raize Software, Inc. is here to teach us how to get the most out of them. Just seven days away, Leveraging High DPI in VCL Applications is a must-see talk for every RAD Studio developer, hobbyist, and enthusiast looking to pick up new techniques to stay relevant in our ever-changing software landscape.
DelphiCon 2020 offers ten talks and four expert panels from Embarcadero technology partners and Most Valuable Professionals, covering the software spectrum from education to industrial database access. Come for the high-DPI knowledge and leave with a deeper understanding of Delphi web applications. The conference is free and open to the public. Register now by clicking the "Save My Seat" button at delphicon.embarcadero.com!
DelphiFeeds.com was launched by the Gurock brothers around 2005. Since then, Gurock's TestRail product has become genuinely popular, and they were so busy that they no longer had time to maintain DelphiFeeds. It kept aggregating feeds and publishing headlines, but no updated feed sources were being added. Meanwhile, newer sites such as BeginEnd.net and, more recently, DelphiMagazine.com arrived with up-to-date feed lists, yet DelphiFeeds remained the de facto news source for many in the community.
Although DelphiFeeds never died, today it is reborn. It runs on a brand-new server, with all of the old feeds and several updated new ones. Over time, new feed sources will be added and refreshed, and stale ones removed. None of the previous trending articles or user accounts were migrated, and new user account registration is not yet enabled, but all of that and more is coming soon.
If you have another news site you prefer, that's fine, but if not, I'd recommend checking out the new DelphiFeeds and staying tuned for updates and improvements!
Since the release of Delphi and C++Builder 10.4, we have created more ways for developers to easily migrate and update their existing legacy Borland Delphi and C++Builder applications to new, modernized versions. While there are many reasons to upgrade to the latest version, it doesn't always seem like the most feasible idea. It can be a daunting task when you consider migrating your data off a platform that has worked for so many years, or away from a data access layer you have never had to worry about.
In many Borland applications it has been common to use the BDE as the data access layer for Delphi and C++Builder applications, but as time has passed, the BDE has stayed in the past. Newer technologies have joined the ranks and surpassed the capabilities of the 32-bit BDE we all loved.
The first step in removing the BDE from your application is deciding which data access components to replace it with. There are a few options you can use, including UniDAC and IBeXpress. In this article I'll be using FireDAC, the data access component set included in the RAD Studio, Delphi, and C++Builder Enterprise and Architect editions. For more information on FireDAC, check out the FireDAC Docwiki.
BDE to FireDAC migration
Changing the frameworks you use to access your data has become easier over the years. We now have tools such as reFind (Delphi-specific) for BDE and DBExpress, Delphi Parser's BDE-to-FireDAC migration tool for C++Builder and Delphi, and many other tools available for migrations. Check out the video below, where we show you how to use the reFind tool to migrate the BDE components on a form to FireDAC.
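The migration pass that reFind performs is, at heart, rule-driven search-and-replace across your source files. As a simplified, hypothetical stand-in for that idea, here is a Python sketch: each rule is a regex mapped to a replacement, applied over the source text. The component renames below are typical BDE-to-FireDAC mappings used for illustration; a real migration uses the rule files shipped with the tool, which handle far more than class names.

```python
import re

# Illustrative BDE -> FireDAC rename rules (not reFind's actual rule file).
RULES = [
    (re.compile(r"\bTDatabase\b"), "TFDConnection"),
    (re.compile(r"\bTTable\b"), "TFDTable"),
    (re.compile(r"\bTQuery\b"), "TFDQuery"),
    (re.compile(r"\bTStoredProc\b"), "TFDStoredProc"),
]

def migrate_source(text: str) -> str:
    """Apply every rename rule to one source file's text."""
    for pattern, replacement in RULES:
        text = pattern.sub(replacement, text)
    return text

pas = "var DB: TDatabase; Q: TQuery;"
print(migrate_source(pas))  # var DB: TFDConnection; Q: TFDQuery;
```

The word-boundary anchors (`\b`) matter: they keep the rules from rewriting identifiers that merely contain a component name, which is the same reason rule-based tools beat a naive global find-and-replace.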
BDE data sources to InterBase
Once you have migrated the components in your application, in some cases you can stay with the database you already have. While FireDAC supports Paradox and other desktop databases through ODBC, it also ships several drivers that let you connect to a large number of databases such as Oracle, DB2, MySQL, MSSQL, InterBase/Firebird, and so on.
While you could use InterBase's data import and export features combined with a database design tool that reverse- and forward-engineers your database schema, there are tools available that take the hassle out of this process. One such tool is InterBase Datapump (freeware), a tool I use when working with customers to migrate BDE data sources to InterBase databases.
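The core of what any such data pump automates is a batched copy loop from a source database into a target. Here is a minimal generic sketch, with Python's built-in sqlite3 standing in for both ends; this is an illustration of the pattern, not InterBase Datapump itself, which additionally maps BDE types, recreates indexes, and handles InterBase specifics.

```python
import sqlite3

def pump_table(src, dst, table, batch=100):
    """Copy all rows of `table` from src to dst in batches; returns rows copied."""
    cols = [c[1] for c in src.execute(f"PRAGMA table_info({table})")]
    placeholders = ",".join("?" * len(cols))
    cur = src.execute(f"SELECT {','.join(cols)} FROM {table}")
    copied = 0
    while True:
        rows = cur.fetchmany(batch)     # stream in batches, not all at once
        if not rows:
            break
        dst.executemany(f"INSERT INTO {table} VALUES ({placeholders})", rows)
        copied += len(rows)
    dst.commit()                        # one commit keeps the copy atomic
    return copied

src = sqlite3.connect(":memory:")
dst = sqlite3.connect(":memory:")
src.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
src.executemany("INSERT INTO customers VALUES (?, ?)",
                [(1, "Acme"), (2, "Globex")])
dst.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
print(pump_table(src, dst, "customers"))  # 2
```

Batching and a single commit are the two choices that make this pattern practical for large tables: memory stays bounded, and a failed run leaves the target empty rather than half-filled.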
There are many resources available for teams looking to migrate away from older technologies. Check out the Embarcadero Upgrade and Migration Center today to see how easy the move to modernize and update your legacy Delphi and C++Builder applications could be.
In recent years, Embarcadero has been updating its developer tools quite regularly: at least twice a year. With a renewable update subscription, our users get the very latest releases of their products right away, while continuing their projects on the same or even an older version. Sooner or later, though, the moment comes when the tools at hand are no longer enough to realize new ideas or user requirements. At that point it becomes very important to have additional guideposts for planning those projects optimally, both in terms of timing and of the tools used to deliver them.
Understanding this need, our company regularly communicates its plans for evolving our tools and publishes a roadmap for their development. Building such a roadmap is a fairly complex undertaking, since it has to take into account the interests and requests of most of our users around the world, many of whom are large companies with established, successful businesses, and it has to set priorities correctly across different tools and regions. Although these roadmaps are statements of intent, they are still very important for planning, especially in our country, with its long funding cycles.
A useful tradition has been the personal commentary on the published plans from the product managers Marco Cantù, David Millington, and Sarina Dupont. https://blogs.embarcadero.com/rad-studio-november-2020-roadmap-pm-commentary/ Alongside the official roadmap slides, this companion blog post offers more detail, context, and insight.
Below I'd like to highlight the main points they made.
What's available now
Earlier this year we released 10.4 Sydney. The 10.4 release was very well received by our customers and included the first delivery of the rebuilt Delphi Code Insight engine, now based on the Language Server Protocol (LSP) architecture, a new debugger for C++ Win64, and new Delphi language features such as custom managed records. We also significantly expanded VCL styles, with support for high-DPI monitors and per-control styling.
Following the 10.4 release, in September 2020 we shipped 10.4.1, which focused largely on quality and on further improving the features delivered in 10.4, particularly the newly added Delphi LSP support. 10.4.1 includes more than 800 quality improvements, including over 500 fixes for publicly reported issues on the Embarcadero Quality Portal.
Before getting into the details of the 10.4.2 and 10.5 releases, we'd like to stress that 10.4/10.4.1 has been one of our most popular releases to date, with more downloads than 10.3 and 10.2. That is especially impressive in the midst of COVID-19. We are currently working on the 10.4.2 release, planned for the first half of calendar year 2021 and described in detail in the roadmap. Some time before the release, we plan to invite RAD Studio, Delphi, and C++Builder customers with an active update subscription to join the beta test of the upcoming release. The chance to join the beta and give the product team fast feedback early in the development cycle is one of the benefits of an update subscription.
Code Insight
In 10.4.2, one of the key areas of work is a new version of Delphi Code Insight (aka Delphi LSP). 10.4.2 will include not only many fixes and refinements, but also the less common code-insight features we did not include in the initial version, as well as some new or significantly improved Code Completion features. For example, we plan to add, among other things, Ctrl+Click on 'inherited', as well as reworked completion in 'uses' clauses.
Most visible to all C++ customers will be a complete overhaul of C++ code completion when using the Clang compilers. In 10.3, when we moved to C++17, we had to replace the code-completion technology used by the IDE. For 10.4.2, we decided to fully overhaul C++ Code Completion to deliver the performance developers need. C++ customers should find that code completion and navigation work reliably and well. We have even addressed some of the harder cases, such as providing completion inside a header file (much harder than in a .cpp file)! The end result should be everything you have asked us for.
C++ exception handling
Exception handling is a complex area that requires close cooperation between the compiler and the RTL to work correctly. There are common conventions for exceptions, such as never letting one cross a module boundary (for example, being raised in a DLL but caught in an EXE), but they are not always followed, sometimes for good reasons. In C++, we need to handle C++ exceptions, OS exceptions, and SEH, without forgetting Delphi exception handling either.
In 10.4.2 we have reworked the exception-handling system. As we get closer to the release, we will publish a detailed blog post on the scenarios we support. Some excellent improvements are already visible!
IDE и инструментарий
10.4.2 запланирована работа по улучшению IDE
В этом релизе мы добавляем третий стиль, который использует традиционный серый, а не синий для основных цветов. Это может быть полезно для тех, кому требуются особые условия по зрению.
Мы также планируем дальнейшее улучшение настольных шаблонов, мульти-мониторных шаблонов, редакторов форм и аналогичных областей. Это включает в себя возможность работы в дизайнере формы одновременно с открытым редактором кода для этой формы
Наконец, мы хотим усовершенствовать инструмент миграции Settings Migration Tool, чтобы помочь перенести настройки RAD Studio из версии в версию (например, из 10.3 в 10.4) и лучше сохранить вашу конфигурацию при переходе к выпуску обновления (например, из 10.4.1 в 10.4.2). Мы планируем добавлять конкретные предустановленные конфигурации для каждого сценария и включать конфигурационные файлы в дополнение к настройкам реестра, рассматриваемым сегодня.
Поддерживаемые платформы
Что касается целевых операционных систем, то в настоящее время мы сосредоточены на совершенствовании существующих платформ, поддерживаемых в 10.4.x, и есть два направления, над которыми мы работаем в 10.4.2.
Первое — это часть нашего постоянного внимания к Windows как нашей основной целевой ОС. В операционной системе Microsoft мы внимательно следим за текущим направлением деятельности Microsoft по объединению WinRT API и традиционного Win API, через Project Reunion. Project Reunion (https://github.com/microsoft/ProjectReunion) включает в себя различные технологии, начальными из которых являются WinUI 3, WebView2 и MSIX.
Элемент управления WebView2 — это новый Windows-платформенный компонент, встраивающий Edge Chromium. Мы предоставили поддержку этой функции в 10.4 с VCL компонентом TEdgeBrowser.
Еще одной частью релиза будет поддержка MSIX-формата упаковки, который мы планируем включить в 10.4.2. MSIX является преемником APPX, цели, которую мы в настоящее время предлагаем в рамках интеграции RAD Studio IDE Desktop Bridge, и предназначен для Microsoft Store и для развертывания на предприятии.
Мы работаем над новыми VCL Native Windows Controls, чтобы вы могли предоставить своим клиентам более современный пользовательский интерфейс:
Один из них — оптимизированный виртуальный список, который позволит вам отображать большое количество элементов с гибкой комбинацией текста и графики. Управление будет свободно основываться на подходе существующего управления DBCtrlGrid, но без необходимости использования источника данных. Он будет поддерживать использование Live Bindings.
Другой новый компонент VCL, который мы добавляем, это контрол цифрового ввода, аналогичный контролу NumberBox платформы WinUI. Этот элемент управления обеспечивает более легкий и плавный цифровой ввод, учитывая различные форматы (целые числа, числа с плавающей точкой, валютные значения), а также включая простую оценку выражений.
Совместимость с новыми версиями операционных систем
Мы планируем обеспечить полную совместимость с новыми версиями операционных систем, выпущенными Apple и Google после выхода RAD Studio 10.4.1. Несмотря на то, что сегодня можно ориентироваться на эти платформы, есть несколько открытых проблем, которые мы хотим решить должным образом (а не через обходные пути). Цель состоит в том, чтобы иметь полную поддержку:
iOS 14 и iPadOS 14 (Delphi и C++).
macOS 11.0 Big Sur (Intel) (Delphi)
Android 11 (Delphi)
Работа над повышением качества
Что касается качества, то в 10.4.2 (как и в 10.4.1) мы продолжим прилагать значительные усилия по обеспечению стабильности, производительности и качества. Мы планируем решение проблем, о которых сообщали клиенты, и эскалацию поддержки во многих областях продукта. Инструменты и библиотеки, на которых мы особенно сосредоточимся, в дополнение к тем, которые были перечислены ранее, включают в себя:
Компилятор Delphi (для всех платформ) для повышения его надежности и обратной совместимости, но особое и глубокое внимание уделяется производительности компилятора (и компоновщика) для сокращения времени компиляции больших проектов, а также ускорению LSP-движка (который использует компилятор для анализа исходного кода).
Библиотека клиента SOAP вместе с инструментом импорта WSDL, который генерирует клиентский код, используемый для интерфейса с серверами SOAP.
Библиотека параллельного программирования (PPL), которая предлагает отличную абстракцию различных платформ и многоядерных потоковых возможностей CPU, с точки зрения задач, фьючерсов и параллельных циклов
Многоуровневые решения для веб-сервисов, входящие в состав RAD Studio, с усовершенствованиями как RAD Server, так и старого движка DataSnap, а также общими усовершенствованиями клиентских библиотек HTTP и REST. Мы также продолжим фокусироваться на поддержке Azure и AWS Cloud.
Дополнительное внимание будет уделено VCL-стилям и HighDPI-стилям, а также VCL в целом.
Для библиотеки FireMonkey мы продолжаем улучшать компоненты TMemo (как в платформе, так и в стилевых версиях), драйвер библиотеки Metal GPU, представленный в 10.4, и переделывать управление датчиками на Android, чтобы обеспечить лучшую поддержку для различных Android-устройств.
Планируемое по 10.5
Для 10.5 запланирована реализация замечательных новых функций, которые с нетерпением ожидают многие клиенты.
В течение создания 10.5 мы планируем внедрить новую целевую платформу для Delphi — macOS ARM (на базе процессоров Apple Silicon), значительную работу по поддержке IDE HighDPI, расширения инструментальной цепочки C++, а также многие другие дополнительные функции и улучшения качества
Что касается платформы Windows, то, как уже упоминалось ранее, мы планируем предложить поддержку различных технологий, входящих в состав Microsoft Project Reunion. В частности, в релизе RAD Studio 10.5 мы планируем интегрировать поддержку современных Windows UX через библиотеку WinUI 3. Согласно дорожной карте Microsoft для библиотеки, должна появиться возможность использовать компоненты этой библиотеки в нативном приложении, основанном на классическом API, смешивая формы и элементы управления различных типов. Реальные детали будут зависеть от того, что библиотека будет предоставлять в плане интеграции с нативными приложениями, но наш текущий план состоит в том, чтобы интегрировать эту библиотеку в VCL с новыми специфическими элементами управления.
Говоря о платформах, мы хотим добавить новую ОС для приложений Delphi: новый компилятор для ARM-версии операционной системы macOS с аппаратным обеспечением Apple на базе процессоров Apple Silicon. Несмотря на то, что вы можете запускать приложения Intel, цель состоит в том, чтобы иметь родное приложение ARM для нового поколения Mac. Это будет значительное расширение Delphi, включая новый компилятор, обновления библиотеки времени исполнения и различные библиотеки высокого уровня. Мы также планируем расширить синтаксис языка Delphi.
Планируется полная поддержка HighDPI в IDE. Уже в паре релизов VCL поддерживает HighDPI, теперь будет поддерживать HighDPI и IDE RAD Studio, которая в основном использует VCL. Это гарантирует четкую визуализацию на всех современных экранах с высоким разрешением, в том числе при перемещении окон по экранам с различными разрешениями и масштабами.
Дизайнер форм VCL является одним из ключевых инструментов, который вы используете при создании приложения. Смысл дизайнера заключается в том, чтобы быстро построить интерфейс, приближенный к тому, как он будет выглядеть при работе приложения, в отличие от инструментов, которые описывают интерфейс только в тексте и не обеспечивают немедленной обратной связи / цикла итераций. В 10.5, мы планируем расширить этот элемент, чтобы проект выглядел похожим на то, как будет выглядеть ваше приложение при работе, добавив поддержку VCL-стиля в конструктор, так что когда любой из ваших элементов управления будет стилизован, вы увидите, что они также стилизованы и в конструкторе.
Дизайнер форм FMX также является ключевым инструментом, когда вы строите кроссплатформенное приложение. Мы планируем внести некоторые инструменты проектирования из дизайнера VCL, такие как руководства по выравниванию, чтобы убедиться, что дизайнер имеет функции производительности, которые вам нужны.
Мы также планируем сконцентрироваться на интеграции IDE с системами управления исходными кодами, чтобы облегчить коллективную работу вашей команде разработчиков. Кроме того, мы планируем некоторые улучшения в том, как IDE будет представлена при первом запуске, чтобы помочь новичкам в Delphi и C++Builder начать работу.
Наконец, многие клиенты используют Delphi или C++Builder на выделенном сборочном сервере. Наряду с управлением исходными кодами, тестированием и подобной практикой, хорошей практикой является официальная сборка на конкретной машине или виртуальной машине. В настоящее время для установки RAD для сборочного сервера необходимо установить полную IDE — но это не должно требоваться, так как для сборки нужны только инструменты командной строки. Мы планируем сценарий установки специально для сборочных серверов.
C++Builder
В 10.5 мы планируем полную замену другому базовому инструменту — компоновщику. Как и отладчик, это будет для Win64.
Вы заметите ориентацию на 64-битную Windows в C++Builder. Многие клиенты используют Clang для работы с Win64, и мы хотим, чтобы наши инструменты были на уровне или лучше, чем та, к которой вы можете привыкнуть из классического компилятора. Кроме того, многие начинают смотреть исключительно на 64-битные приложения, при этом 32-битные приложения обновляются, а новые — только на 64-битные.
Visual Assist — удивительное расширение производительности для Visual C++, дающее завершение кода, рефакторинг и многое другое. Мы исследовали различные способы его интеграции в C++Builder и планируем сделать это в 10.5.
Наконец, мы также планируем улучшить взаимодействие с Delphi/C++. Возможность использования двух языков — это большой прирост производительности и одна из ключевых причин использования C++Builder или RAD Studio. Планируется дополнительная работа для оттачивания этой интеграции. Это должно обеспечить более плавную интеграцию с функциями RTL.
Delphi отладчик
В 10.4 мы представили совершенно новый отладчик для C++ Win64, основанный на LLDB. В конечном итоге, мы стремимся использовать один и тот же отладчик на всех платформах — сегодня мы используем смесь различных отладчиков. Ключом к этому является добавление к LLDB фронтенда языка Delphi, который позволяет оценить синтаксис Delphi, например, в диалоге Evaluate/Modify. Мы планируем представить первую платформу, использующую LLDB, с этим новым фронтендом в 10.5.
Резюме
У нас большие планы на ближайшие релизы Delphi, C++Builder и RAD Studio! Мы будем активно работать, чтобы сделать задуманное доступным пользователям!
The Embarcadero RAD Studio product management team regularly updates the product development roadmap for Delphi, C++Builder, and RAD Studio. As you can see in our official roadmap blog post, we have just released a new version of the roadmap, covering the key features we have planned for the next 12 months. Alongside the official roadmap slides, we wanted to offer more detail, information, and insight in this additional blog post. You may find it useful to keep the slides open for reference while reading the expanded information we provide here.
In our roadmap you can find the key features we have planned for calendar year 2021. Before we get to the details of our updated roadmap, we want to recap what we have delivered so far.
Earlier this year we released 10.4 Sydney. The 10.4 release was very well received by our customers and included the first delivery of our complete rewrite of the Delphi Code Insight engine, now based on the Language Server Protocol architecture; a new debugger for C++ Win64, which addressed a very long-standing customer request; and new Delphi language features such as custom managed records. We also significantly expanded the use cases for VCL styles with support for HighDPI monitors and per-control styling.
We followed up the 10.4 Sydney release with 10.4.1 in September 2020, which focused primarily on quality and on further improvements to the features delivered in 10.4, in particular the newly added Delphi LSP support. 10.4.1 includes over 800 quality improvements, among them more than 500 fixes for publicly reported issues on the Embarcadero Quality Portal.
Before we dive into the details of the 10.4.2 and 10.5 releases, we wanted to highlight that 10.4/10.4.1 has been one of our most popular releases to date, with more downloads than 10.3 and 10.2. This is particularly impressive in the midst of COVID-19. We attribute the success both to the release itself and to our increased engagement with technology partners, who continue to work with our team to update and develop new features. We also want to thank you, our customers, for the great feedback you have given the product team, in terms of features you would like added and areas you would like us to focus on for quality, and for your participation in our beta programs (which is one of the benefits of an Update Subscription).
We are currently working on the 10.4.2 release, planned for the first part of calendar year 2021 and detailed in the roadmap and in this commentary blog post. Some time before the release, we plan to invite RAD Studio, Delphi, and C++Builder customers with an active Update Subscription to join the beta test for the upcoming release. The 10.4.2 beta will be an NDA beta, requiring participants to sign our non-disclosure agreement before joining. The ability to join betas and provide early feedback to product management during the development cycle is one of the benefits of keeping your Update Subscription current.
For 10.5, we plan to introduce a new target platform for Delphi, macOS ARM (based on Apple Silicon CPUs), significant work on IDE HighDPI support, C++ toolchain extensions, and many other additional features and quality improvements. See below for more details.
RAD Studio Roadmap Development Timeline
Before we get to the details of the features the development team is working on today or exploring for the future, let's take a look at the timeline of the next releases, as shown in the main roadmap slide:
David's Commentary on the 10.4.2 Plans
Delphi Code Insight
As Marco wrote above, we put a heavy focus on quality in 10.4.1. That continues in 10.4.2, where one important area of work is the new Delphi Code Insight (a.k.a. Delphi LSP). Not only will 10.4.2 include many fixes and tweaks, plus less commonly used Code Insight features that we did not include in the initial version, but also some new or significantly improved code completion behavior. For example, we plan to add Ctrl+Click on 'inherited', as well as reworked completion in 'uses' clauses, among other things.
C++ Code Insight
C++ also continues the theme of ongoing quality work, with a focus on two areas.
Most visible to all C++ customers will be a complete overhaul of C++ code completion when using the Clang compilers. In 10.3, when we upgraded to C++17, we had to replace the code completion technology the IDE used. Since its introduction, we have been improving C++ code completion in each release, addressing cases where certain code patterns or project settings could cause problems with completion.
For 10.4.2, we decided to do a full overhaul of C++ code completion, to provide the productivity developers need. C++ customers should find that code completion and navigation work reliably and well.
We have even been tackling some of the harder cases, such as providing completion in a header file (much harder than in a .cpp file)! The end result should be everything you have asked us for.
C++ Quality and Exception Handling
The other significant C++ quality area in 10.4.2 is exception handling.
Exception handling is a complex area requiring close interaction between the compiler and the RTL to work correctly. There are common conventions around exceptions, such as never letting an exception cross a module boundary (for example, being thrown in a DLL but caught in an EXE), but these are not always followed, sometimes for good reasons. In C++, we need to handle C++ exceptions, OS exceptions, and SEH, while also keeping Delphi exception handling in mind.
In 10.4.2, we have overhauled the exception handling system. As we get closer to the release, look for a blog post detailing the scenarios we support. Internally, we are already seeing some great improvements!
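The module-boundary convention mentioned above is easy to illustrate. Below is a minimal, hedged C++ sketch (the function and status names are invented for illustration, not RAD Studio APIs): the exported entry point catches everything and translates it into an error code, so no exception object ever crosses the DLL/EXE boundary.

```cpp
#include <cassert>
#include <stdexcept>

// Illustrative status codes returned across the (hypothetical) module boundary.
enum Status { OK = 0, ERR_DOMAIN = 1, ERR_UNKNOWN = 2 };

// Internal code is free to throw: exceptions stay inside the module.
double risky_sqrt(double x) {
    if (x < 0) throw std::domain_error("negative input");
    double guess = x / 2 + 1;                    // Newton's method
    for (int i = 0; i < 20; ++i) guess = (guess + x / guess) / 2;
    return guess;
}

// Hypothetical exported boundary function: catches everything and returns a
// status code plus an out-parameter, so nothing propagates past the boundary.
extern "C" int dll_sqrt(double x, double* out) {
    try {
        *out = risky_sqrt(x);
        return OK;
    } catch (const std::domain_error&) {
        return ERR_DOMAIN;                       // translated, not re-thrown
    } catch (...) {
        return ERR_UNKNOWN;                      // no exception escapes
    }
}
```

A caller in another module then checks the returned status instead of relying on exception unwinding working across compiler or RTL versions.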
IDE
Beyond the focus on code completion for both languages, other work is planned for the IDE in 10.4.2.
In 10.3 we introduced the two current themes, a light and a dark style. (The dark style, although significantly different, actually first appeared in 10.2.3; it has been one of our most popular features.) The light style is predominantly pale blue. In this release, we are adding a third style that uses traditional gray rather than blue for its main colors. Think of it as a retro style for those who liked how the IDE looked in the 2010-XE7 era, a throwback to the classic look. We also believe it may be useful for those with specific vision requirements.
We also plan further quality work on desktop layouts, multi-monitor layouts, form designing, and similar areas. This includes the ability to design a form in the Form Designer at the same time as having that form's code open in the editor. Based on your feedback, this was the most common reason for using the old undocked designer, which we removed in 10.4.1, and we are glad to let you code and design in a form unit using the modern designer.
Finally, we want to improve the Settings Migration Tool, to help migrate RAD Studio settings from version to version (such as 10.3 to 10.4) and to better preserve your configuration when moving to an update release (such as 10.4.1 to 10.4.2). We plan to add specific preset configurations for each scenario, and to include configuration files in addition to the registry settings handled today.
Marco's Commentary on the 10.4.2 Plans
In terms of target operating systems, we are currently focused on refining the existing platforms supported in 10.4.x, and there are two core areas we are working on for 10.4.2.
The first is part of our continued focus on Windows as our primary target OS. On Microsoft's operating system, we are closely following Microsoft's current push to unify the WinRT API and the traditional Win API through Project Reunion. Project Reunion (https://github.com/microsoft/ProjectReunion) encompasses several technologies, the initial ones being WinUI 3, WebView2, and MSIX. The WebView2 control is the new Windows platform component embedding Edge Chromium. We delivered support for it in 10.4 with the VCL TEdgeBrowser component.
A second building block will be support for the MSIX packaging format, which we plan to offer in 10.4.2. MSIX is the successor to APPX, a target we currently offer as part of the RAD Studio IDE's Desktop Bridge integration, and is intended for both the Microsoft Store and enterprise deployment.
Another area of our improved Windows platform support, specifically for VCL applications, is the addition of two new controls meant to help our customers modernize and improve the UX of their applications. We are working on new VCL native Windows controls, so you can offer your customers a more modern user interface:
One of them is a highly optimized virtual list that will let you display a large number of items with a flexible combination of text and graphics. The control will be loosely based on the approach of the existing DBCtrlGrid control, but without requiring a data source. It will support the use of LiveBindings.
The other new VCL component we are adding is a numeric input control, similar to the WinUI platform's NumberBox control. This control provides easier and smoother numeric entry, accounting for different formats (integers, floating-point numbers, currency values) and also including simple expression evaluation.
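To make "simple expression evaluation" concrete: a NumberBox-style control lets the user type something like `(2+3)*4` into a numeric field and commits the computed value. Here is a minimal recursive-descent sketch of that idea in portable C++ (the grammar and function names are ours for illustration; the actual control's behavior may differ):

```cpp
#include <cassert>
#include <cstdlib>
#include <string>

// Tiny recursive-descent evaluator: +, -, *, / and parentheses,
// with the usual precedence (term binds tighter than sum).
struct Parser {
    const char* p;
    void skip() { while (*p == ' ') ++p; }
    double expr() {                       // expr := term (('+'|'-') term)*
        double v = term();
        while (*p == '+' || *p == '-') {
            char op = *p++;
            double r = term();
            v = (op == '+') ? v + r : v - r;
        }
        return v;
    }
    double term() {                       // term := factor (('*'|'/') factor)*
        double v = factor();
        while (*p == '*' || *p == '/') {
            char op = *p++;
            double r = factor();
            v = (op == '*') ? v * r : v / r;
        }
        return v;
    }
    double factor() {                     // factor := number | '(' expr ')'
        skip();
        double v;
        if (*p == '(') {
            ++p;
            v = expr();
            skip();
            ++p;                          // consume ')'
        } else {
            char* end;
            v = std::strtod(p, &end);
            p = end;
        }
        skip();
        return v;
    }
};

double eval_expr(const std::string& s) {
    Parser ps{s.c_str()};
    return ps.expr();
}
```

A control built on this would evaluate the expression on commit and then reformat the result according to the selected numeric format (integer, float, currency).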
The second part of our improvements focuses on the currently supported target platforms. We plan to deliver full compatibility with the new operating system versions released by Apple and Google since RAD Studio 10.4.1 shipped. While you can target these platforms today, there are a few open issues we want to address properly (rather than via workarounds). The goal is to have full support for:
iOS 14 and iPadOS 14 (Delphi and C++)
macOS 11.0 Big Sur (Intel) (Delphi)
Android 11 (Delphi)
In terms of quality, we will continue investing significant effort in stability, performance, and quality in 10.4.2 (as we did in 10.4.1). We plan to address customer-reported issues and support escalations across many product areas. The tools and libraries we will particularly focus on, in addition to those David listed earlier, include:
The Delphi compiler (for all platforms), improving its resilience and backward compatibility, with a specific, deep focus on compiler (and linker) performance to reduce compile times for larger projects, as well as to speed up the LSP engine (which uses the compiler to analyze source code).
The SOAP client library, along with the WSDL import tool that generates the client-side code used to interface with SOAP servers.
The Parallel Programming Library (PPL), which offers a nice abstraction over the various platforms' and multi-core CPUs' threading capabilities, in terms of tasks, futures, and parallel loops.
The multi-tier web services solutions included in RAD Studio, with improvements to both RAD Server and the older DataSnap engine, plus general improvements to the HTTP and REST client libraries. We will also continue focusing on Azure and AWS cloud support.
VCL styles and HighDPI styles will receive additional attention, along with the VCL in general.
For the FireMonkey library, we are continuing to improve the TMemo controls (in both the platform and styled versions) and the Metal GPU library driver introduced in 10.4, and we are reworking sensor management on Android to provide better support across different Android devices.
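The PPL's three abstractions mentioned in the list above (tasks, futures, parallel loops) have close analogues in standard C++, which can serve as a mental model. The sketch below uses `std::async` and is not the PPL API itself:

```cpp
#include <algorithm>
#include <cassert>
#include <future>
#include <numeric>
#include <vector>

// A "task": work that runs asynchronously; its result comes back via a future.
int heavy_sum(int n) {
    int s = 0;
    for (int i = 1; i <= n; ++i) s += i;
    return s;
}

// A "parallel for" in miniature: split an index range into chunks,
// launch one asynchronous task per chunk, then join all the futures.
long parallel_sum(const std::vector<int>& data, int chunks) {
    std::vector<std::future<long>> parts;
    size_t step = (data.size() + chunks - 1) / chunks;
    for (size_t b = 0; b < data.size(); b += step) {
        size_t e = std::min(data.size(), b + step);
        parts.push_back(std::async(std::launch::async, [&data, b, e] {
            return std::accumulate(data.begin() + b, data.begin() + e, 0L);
        }));
    }
    long total = 0;
    for (auto& f : parts) total += f.get();   // joining the futures
    return total;
}
```

In the PPL, the same shapes appear as `TTask.Run`, `TTask.Future`, and `TParallel.For`; the point of the abstraction is that the chunking and thread management are handled for you.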
RAD Studio 10.5
For reference, here is the main roadmap slide again:
David's Commentary on the 10.5 Plans
User Experience
We have a number of great new features that many customers are looking forward to planned for 10.5.
First, we plan full HighDPI support in the IDE. The VCL has supported HighDPI for several releases now, and the RAD Studio IDE, which mostly uses the VCL, will now support HighDPI as well. This ensures crisp rendering on all modern high-resolution screens, including when moving windows across screens with different resolutions and scales.
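Per-monitor HighDPI support boils down to rescaling layouts by the ratio of the monitor's DPI to the 96-DPI baseline that Windows treats as 100%. A deliberately simplified sketch of that arithmetic (the real VCL scaling also handles fonts, anchors, and images):

```cpp
#include <cassert>

// A window-relative rectangle in pixels.
struct Rect { int x, y, w, h; };

// 96 DPI is the 100% baseline, so a 144-DPI monitor runs at 150% scale.
int scaled(int value, int dpi) {
    return value * dpi / 96;
}

// Rescale a layout when a window moves between monitors with different DPI.
Rect rescale(const Rect& r, int fromDpi, int toDpi) {
    return { r.x * toDpi / fromDpi, r.y * toDpi / fromDpi,
             r.w * toDpi / fromDpi, r.h * toDpi / fromDpi };
}
```

For example, a 192x96-pixel button designed at 96 DPI becomes 384x192 pixels when the window is dragged onto a 192-DPI (200%) monitor; an IDE that does not perform this rescaling renders blurry or mis-sized on such screens.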
The VCL Form Designer is one of the key tools you use when building your application. The point of the designer is to quickly build a UI that is close to how it will look when your app runs, in contrast to UI tools that describe the UI only in text and don't give that immediate feedback/iteration loop. In 10.5, we plan to extend this so the design looks even closer to how your app will appear at runtime, by adding VCL style support to the designer: when any of your controls are styled, you will see them styled in the designer as well.
The FMX Form Designer is equally a key tool when you build a cross-platform application. We plan to bring over some of the design tooling the VCL designer has, such as alignment guidelines, to make sure the designer has the productivity features you need.
We also plan to focus on the IDE's version control integration, to make it easier for your development team to collaborate. In addition, we plan some improvements to how the IDE presents itself on first run, to help those new to Delphi and C++Builder get started.
Finally, many customers use Delphi or C++Builder on a dedicated build server. Along with version control, testing, and similar practices, it is good practice for official builds to be produced on a specific machine or VM. Today, installing RAD for a build server requires installing the full IDE, but that shouldn't be necessary, since a build needs only the command-line tools. We are planning an installation scenario specifically for build servers.
C++Builder
In 10.4.0 we introduced a new debugger for C++ Win64. This addressed a common customer request, especially because we included "formatters", a way to easily evaluate the contents of STL containers or any data structure, including your own. It was an entirely new debugger, not a new version of what we used before. In 10.5, we plan a similar brand-new replacement for another core tool: the linker. Like the debugger, this will be for Win64.
You will notice a focus on 64-bit Windows here. Many customers use Clang to target Win64, and we want our tooling to be on par with, or better than, what you may be used to from the classic compiler. In addition, many people are starting to look solely at 64-bit, with 32-bit apps being updated and new apps being 64-bit only.
Visual Assist is an amazing productivity extension for Visual C++, providing code completion, refactoring, and more. We have been investigating various ways to integrate it into C++Builder, and plan to do so in 10.5.
Finally, we also plan to improve Delphi/C++ interoperability. The ability to use the two languages together is a big productivity gain and one of the key reasons to use C++Builder or RAD Studio, and further work is planned to polish this integration. It should provide smoother interaction with RTL features.
Delphi Debugger
In 10.4 we introduced the all-new debugger for C++ Win64 (noted above), based on LLDB. Ultimately, we aim to use the same debugger on all platforms; today we use a mix of different debuggers. Key to this is adding a Delphi language frontend to LLDB, which allows you to evaluate Delphi syntax, for example in the Evaluate/Modify dialog. We plan to introduce the first platform using LLDB with this new frontend in 10.5.
Marco's Commentary on the 10.5 Plans
Platforms
In terms of the Windows platform, as mentioned earlier, we plan to offer support for the various technologies that are part of Microsoft's Project Reunion. In particular, for the RAD Studio 10.5 release we plan to integrate support for the modern Windows UX via the WinUI 3 library. According to Microsoft's roadmap for the library, it should become possible to use its components in a native application based on the classic API, mixing forms and controls of different types. The actual details will depend on what the library provides in terms of integration with native applications, but our current plan is to surface the library in the VCL with specific new controls.
Speaking of platforms, we want to add a new target for Delphi applications: a new compiler for the ARM version of the macOS operating system, on Apple hardware based on Apple Silicon CPUs. Even though such machines can run Intel applications, the goal is to have a native ARM application for the new generation of Macs.
This will be a significant extension to Delphi, including a new compiler, runtime library updates, and updates to various higher-level libraries. We also plan to extend the Delphi language syntax on all platforms, and to improve the performance of the math-processing code generated by the compiler on Windows, speeding up number-crunching applications.
We will also keep working on general product quality, and plan to pick a few subsystems to focus on, a decision we will make after evaluating customer feedback on the current release and future updates.
Summary
We have big plans for the upcoming releases of Delphi, C++Builder, and RAD Studio! From exciting changes to code completion for both languages, to a HighDPI IDE, productivity improvements for coding and designing, Windows UX and new VCL components, Delphi debugging, Apple Silicon (M1) support for Delphi, quality work for the compilers, the Delphi and C++ RTLs, SOAP, multi-tier, and more, plus a new linker for C++, the upcoming releases contain some really exciting work. We can't wait to deliver it to you!
Note: These plans and this roadmap represent our intentions as of this date, but our development plans and priorities are subject to change. Accordingly, we can't offer any commitments or other forms of assurance that we will ultimately release any or all of the described products on the schedule described, in the order described, or at all. These general indications of development schedules or "product roadmaps" should not be interpreted or construed as any form of commitment, and our customers' rights to upgrades, updates, enhancements, and other maintenance releases will be set forth only in the applicable software license agreement.
By Marco Cantu, David Millington, and Sarina DuPont
RAD Studio Roadmap Timeline
Before we get to the details of the features the development team is working on today or researching for the future, let's take a look at the timeline of the upcoming releases, as shown on the main roadmap slide:
David's Commentary on the 10.4.2 Plans
Delphi Code Insight
As Marco wrote above, we focused heavily on quality in 10.4.1. That continues in 10.4.2, with one significant area being the new Delphi Code Insight (also known as Delphi LSP). 10.4.2 will include not only many fixes and tweaks, plus some less frequently used Code Insight features that we did not include in the initial release, but also some new or significantly improved code completion capabilities. For example, we plan to add Ctrl+Click on 'inherited', as well as reworked completion in 'uses' clauses, among other things.
C++ Code Insight
C++ also continues the theme of ongoing quality work, with a focus on two areas.
Most noticeable for all C++ customers will be a complete overhaul of C++ code completion when using the Clang compilers. In 10.3, when we upgraded to C++17, we had to replace the code completion technology the IDE used. Since its introduction we have improved code completion in every release, addressing use cases where specific code patterns or project setups could cause completion problems.
For 10.4.2, we decided to completely overhaul code completion for C++, in order to deliver the developer productivity customers are looking for. C++ customers should find that code completion and navigation work well and reliably.
We have even addressed some harder cases, such as providing completion in a header file (much more difficult than in a .cpp file)! The end result should be everything you have asked us for.
C++ Quality and Exception Handling
The other significant quality focus for C++ in 10.4.2 is exception handling.
Exception handling is a complex area that requires close interoperation between the compiler and the RTL to work correctly. There are common conventions for exceptions, such as never allowing an exception to cross a module boundary (for example, being thrown in a DLL but caught in an EXE), but these are not always followed, sometimes for good reasons. In C++, we have to handle C++ exceptions, operating system exceptions, and SEH, not forgetting Delphi exception handling as well.
In 10.4.2 we have overhauled the exception handling system. As we approach the release, look for a blog post detailing the scenarios we support. Internally, we are currently seeing some big improvements!
IDE
Alongside the focus on code completion for both languages, the IDE has some other work planned for 10.4.2.
In 10.3 we introduced the two current themes, a light and a dark style (in fact a dark style, though a significantly different one, was first introduced in 10.2.3; it was one of our most popular features). The light style is predominantly pale blue. In this release we are adding a third style, which uses traditional gray rather than blue for its main colors. Think of it as a retro style for those who liked how the IDE looked in the 2010 to XE7 era, a callback to the classic look. We also believe it may be useful for those who need particular vision accommodations.
We also plan further quality work around desktop layouts, multi-monitor layouts, form designing, and similar areas. This includes letting you design your form in the form designer at the same time as having that form's code open. According to feedback, this was the most common reason for using the old undocked designer, which we removed in 10.4.1, and we are glad to let you both code and design in a form unit using the modern designer.
Finally, we want to enhance the settings migration tool, to help move RAD Studio settings from version to version (such as from 10.3 to 10.4) and to better preserve your configuration when moving to an update release (such as from 10.4.1 to 10.4.2). We plan to add preset configurations specific to each scenario, and to include configuration files in addition to the registry settings covered today.
Marco's Commentary on the 10.4.2 Plans
In terms of target operating systems, we are currently focused on improving the existing platforms supported in 10.4.x, and there are two focus areas we are working on for 10.4.2.
The first is part of our continued focus on Windows as our primary target. On Microsoft's operating system, we are closely following Microsoft's current direction of unifying the WinRT API and the traditional Windows API via Project Reunion. Project Reunion (https://github.com/microsoft/ProjectReunion) comprises several technologies, the initial ones being WinUI 3, WebView2, and MSIX.
The WebView2 control is a new Windows platform component that embeds Edge Chromium. We provided support for this feature in 10.4 with the TEdgeBrowser VCL component.
The second building block will be support for the MSIX packaging format, which we plan for 10.4.2. MSIX is the successor to APPX, a target we currently offer as part of the RAD Studio IDE's Desktop Bridge integration, and it is intended for the Microsoft Store and for enterprise deployment.
Another area of our improved support for the Windows platform, and for VCL applications in particular, is the addition of two new controls intended to help our customers modernize and improve their applications' UX. We are working on new native Windows VCL controls, so you can offer a more modern user interface to your customers:
One of them is a highly optimized virtual list view that lets you display a very large number of items with a flexible combination of text and graphics. The control is loosely based on the approach of the existing DBCtrlGrid control, but without requiring a data source. It will support the use of LiveBindings.
The other new VCL component we are adding is a numeric input control, similar to the WinUI NumberBox platform control. This control provides easier and smoother numeric input, accounting for different formats (integers, floating-point numbers, currency values) and including the evaluation of simple expressions.
The second part of our improvements focuses on the currently supported target platforms. We plan full compatibility with the new operating system versions released by Apple and Google after RAD Studio 10.4.1 shipped. Although you can target these platforms today, there are some open issues we want to address properly (rather than via workarounds). The goal is full support for:
iOS 14 and iPadOS 14 (Delphi and C++)
macOS 11.0 Big Sur (Intel) (Delphi)
Android 11 (Delphi)
In terms of quality, we will continue a significant stability, performance, and quality effort in 10.4.2 (as we did in 10.4.1). We plan to address customer-reported issues and support escalations in many areas of the product. The tools and libraries we will particularly focus on, in addition to those David listed earlier, include:
The Delphi compiler (for all platforms), improving its robustness and backward compatibility, with a particular, deep focus on compiler (and linker) performance, to reduce compile times for large projects and also to speed up the LSP engine (which uses the compiler to analyze source code).
The SOAP client library, along with the WSDL import tool that generates the client code used to interface with SOAP servers
The Parallel Programming Library (PPL), which offers a great abstraction over the different platforms and the threading capabilities of multi-core CPUs, in terms of tasks, futures, and parallel for loops
The multi-tier web service solutions that are part of RAD Studio, with improvements to both RAD Server and the older DataSnap engine, plus general improvements to the HTTP and REST client libraries. We will also keep focusing on our Azure and AWS cloud support.
VCL styles and high-DPI styles will receive extra attention, along with the VCL in general
For the FireMonkey library, we are continuing to improve the TMemo components (in both the platform and styled versions) and the Metal GPU library driver introduced in 10.4, and reworking sensor management on Android to offer better support for a wide variety of Android devices.
RAD Studio 10.5
For reference, here is the main roadmap slide again:
David's Commentary on the 10.5 Plans
User Experience
We have a number of great new features, anticipated by many customers, planned for 10.5.
First, we plan full high-DPI support in the IDE. The VCL has supported high DPI for a few releases now, and the RAD Studio IDE, which mostly uses the VCL, is now gaining high-DPI support as well. This ensures it renders crisply on all modern high-resolution screens, including as you move windows across monitors with different resolutions and scales.
The VCL form designer is one of the key tools you use when building your app. The designer's goal is to let you quickly build a UI while seeing closely how it will look when your app runs, in contrast to UI tools that describe a UI only in text and offer no immediate feedback and iteration loop. In 10.5, we plan to extend that "looks like your running app" aspect by adding VCL style support to the designer, so when any of your controls are styled, you will see them styled in the designer too.
The FMX form designer is similarly an important tool when you build a cross-platform application. We plan to bring over some of the design aids the VCL designer has, such as alignment guides, to ensure the designer has the productivity features you need.
We also plan to focus on the IDE's source control integration, to support your team's collaboration. In addition, we plan some improvements to how the IDE presents itself on first run, to help those new to Delphi and C++Builder get started.
Finally, many customers use Delphi or C++Builder on a dedicated build server. Along with source control, testing, and similar practices, it is good practice to produce official builds on a specific machine or VM. Currently, to install RAD Studio on a build server you have to install the full IDE, but that should not be necessary, because building only requires the command-line tools. We plan an installation scenario specifically for build servers.
C++Builder
In 10.4.0 we introduced a new debugger for C++ Win64. This addressed a common customer request, especially because we included "formatters" that make it easy to evaluate the contents of STL containers, or of any data structure, including your own. This was an entirely new debugger, not a new version of the one used before. In 10.5 we plan a similar new replacement for another core tool, the linker. Like the debugger, this will be for Win64.
You will notice a focus on 64-bit Windows here. Many customers use Clang to target Win64, and we want to make sure our tooling is equal to or better than the tools you were used to with the classic compiler. In addition, many people work exclusively in 64-bit, with 32-bit apps being upgraded and new apps being 64-bit only.
Visual Assist is an amazing productivity extension for Visual C++, offering code completion, refactorings, and more. We have been looking at several ways to integrate it into C++Builder, and plan to do so in 10.5.
Finally, we also plan to improve Delphi/C++ interop. Using the two languages together is a great productivity boost and one of the key reasons to use C++Builder or RAD Studio. This work will improve that integration, and it should allow smoother interaction with RTL features.
Delphi Debugger
In 10.4 we introduced a completely new debugger for C++ Win64 (see above), based on LLDB. Ultimately, we want to use the same debugger on all platforms; today we use a mix of different debuggers. Key to this is adding a Delphi language frontend to LLDB, which will let you evaluate Delphi syntax, for example in the Evaluate/Modify dialog. We plan to ship the first platform using LLDB with this new frontend in 10.5.
Marco's Commentary on the 10.5 Plans
Platforms
In terms of the Windows platform, as mentioned, we plan to offer support for the various technologies that are part of Microsoft's Project Reunion. In particular, in the RAD Studio 10.5 release we look forward to integrating support for the modern Windows UX via the WinUI 3 library. According to Microsoft's roadmap for the library, it should be possible to use its components in a native application built on the classic API, mixing forms and controls of different types. The actual details will depend on what the library delivers in terms of integration with native applications, but we currently plan to integrate it into the VCL with new, specific controls.
Speaking of platforms, we want to add a new target for Delphi applications: a new compiler for the ARM-based version of the macOS operating system, running on Apple hardware powered by Apple Silicon CPUs. While such machines can run Intel applications, the goal is to have a native ARM application for the new generation of Macs.
This will be a significant extension of Delphi, including a new compiler, updates to the runtime library, and updates to the various high-level libraries. We also plan to extend the Delphi language syntax on all platforms, and to improve the performance of the math-processing code the compiler generates on Windows, to speed up applications doing numeric processing.
We will also keep working on overall product quality, and plan to pick a few subsystems to focus on. We will make that decision by evaluating customer feedback on the current version and the upcoming updates.
Summary
We have some great plans for the upcoming releases of Delphi, C++Builder, and RAD Studio! From exciting changes to code completion for both languages, to a high-DPI IDE, productivity improvements in coding and designing, Windows UI and new VCL components, Delphi debugging, Apple Silicon (M1) support for Delphi, quality work on the compilers, the Delphi and C++ RTLs, SOAP, multi-tier, and more, plus a new linker for C++: the upcoming releases contain some truly exciting work. We can't wait to bring it to you!
Note: These plans and this roadmap represent our intentions as of this date, but our development plans and priorities are subject to change. Accordingly, we cannot offer any commitments or other forms of assurance that we will ultimately release any or all of the described products, on the schedule described, in the order described, or at all. These general indications of development schedules or "product roadmaps" should not be interpreted or construed as any form of commitment, and our customers' rights to upgrades, updates, enhancements, and other maintenance releases will be set forth only in the applicable software license agreement.
A equipe de gerenciamento do produto Embarcadero RAD Studio atualiza regularmente o roteiro de desenvolvimento de produto para Delphi, C ++ Builder e RAD Studio. Como você pode ver em nossa postagem do blog oficial do roteiro, acabamos de lançar uma nova versão do roteiro, cobrindo os principais recursos que planejamos para os próximos 12 meses. Junto com os slides do roteiro oficial, queríamos oferecer mais detalhes, informações e percepções com esta postagem de blog adicional. Pode ser útil abrir os slides para referência enquanto lê as informações expandidas que fornecemos aqui.
Em nosso roteiro, você pode encontrar os principais recursos que planejamos para o ano civil de 2021. Antes de entrarmos nos detalhes de nosso roteiro atualizado, queremos recapitular o que entregamos até agora.
No início deste ano, lançamos 10.4 Sydney. O lançamento 10.4 foi muito bem recebido por nossos clientes e incluiu a primeira entrega de nossa reescrita completa do mecanismo Delphi Code Insight, agora baseado na arquitetura de protocolo Language Server, um novo depurador para C ++ Win64, que atendeu a uma solicitação de cliente muito antiga e novos recursos de linguagem Delphi, como registros gerenciados personalizados. Também expandimos significativamente o caso de uso de estilos VCL com suporte para monitores HighDPI e estilo por controle.
Demos seguimento à versão 10.4 de Sydney com a versão 10.4.1 em setembro de 2020, que se concentrou principalmente na qualidade e nas melhorias adicionais dos recursos fornecidos na 10.4, em particular o suporte Delphi LSP recém-adicionado. 10.4.1 inclui mais de 800 melhorias de qualidade, incluindo mais de 500 melhorias de qualidade para problemas relatados publicamente no Portal da Qualidade da Embarcadero.
Antes de entrarmos em detalhes sobre as versões 10.4.2 e 10.5, gostaríamos de destacar que a 10.4 / 10.4.1 foi uma das versões mais populares até o momento, com mais downloads em comparação com a 10.3 e a 10.2. Isso é especialmente impressionante no meio do COVID-19. Atribuímos o sucesso tanto ao lançamento quanto ao nosso maior engajamento com parceiros de tecnologia que continuam a colaborar com nossa equipe para atualizar e desenvolver novos recursos. Também queremos agradecer a vocês, nossos clientes, pelo ótimo feedback que vocês têm fornecido à equipe de produto, em termos de recursos que gostariam de ver adicionados e áreas nas quais gostaria que nos concentrássemos em termos de qualidade, e sua participação em nossos programas beta (que é um dos benefícios de ter uma assinatura).
Atualmente, estamos trabalhando na versão 10.4.2, planejada para a primeira parte do ano civil de 2021 e detalhada no roteiro e nesta postagem de blog de comentários. Algum tempo antes do lançamento, esperamos convidar os clientes RAD Studio, Delphi e C ++ Builder com uma assinatura de atualização ativa para ingressar no teste beta para o próximo lançamento. O beta 10.4.2 será um beta NDA, exigindo que os participantes assinem nosso acordo de não divulgação antes de poderem participar do beta. Poder ingressar nos betas e participar do fornecimento de feedback para o gerenciamento de produtos no início do ciclo de desenvolvimento é um dos benefícios de estar atualizado com a assinatura de atualização.
Para o 10.5, planejamos introduzir uma nova plataforma de destino para Delphi, macOS ARM (baseado em CPUs Apple Silicon), trabalho significativo em torno do suporte IDE HighDPI, extensões de cadeia de ferramentas C ++ e muitos outros recursos adicionais e melhorias de qualidade. Veja abaixo para mais detalhes.
RAD Studio Roadmap Timeline
Antes de entrarmos nos detalhes dos recursos nos quais a equipe de desenvolvimento está trabalhando hoje ou pesquisando para o futuro, vamos dar uma olhada na linha do tempo dos próximos lançamentos, conforme mostrado no slide principal do roteiro:
Comentário de David sobre 10.4.2 Planos
Delphi Code Insight
Como Marco escreveu acima, em 10.4.1 nos concentramos muito na qualidade. Isso continua em 10.4.2, com uma área de trabalho significativo sendo o novo Delphi Code Insight (também conhecido como Delphi LSP). O 10.4.2 não apenas incluirá muitas correções e ajustes, como também recursos de insight de código menos usados que não incluímos incluem na versão inicial, mas também alguns recursos novos ou significativamente aprimorados no autocompletar de código. Por exemplo, planejamos adicionar ctrl-click em ‘herdado’, bem como preenchimento retrabalhado em cláusulas ‘usa’, entre outras.
C ++ Code Insight
C ++ também continua o tema do trabalho de qualidade contínuo, com foco em duas áreas.
O mais notável para todos os clientes C ++ será uma revisão completa do auto-completar de código C ++ ao usar os compiladores Clang. No 10.3, quando atualizamos para C ++ 17, tivemos que substituir a tecnologia de autocompletar de código que o IDE usava. Desde a sua introdução, temos aprimorado o preenchimento de código em cada versão, abordando casos de uso em que padrões de código ou configurações de projeto específicos podem causar problemas de conclusão.
Para 10.4.2, decidimos realizar uma revisão completa do preenchimento de código para C ++, para fornecer a produtividade do desenvolvedor que os clientes estão procurando. Os clientes de C ++ devem descobrir que o preenchimento de código e a navegação funcionam bem e de maneira confiável.
Já abordamos alguns casos mais difíceis, como fornecer preenchimento em um arquivo de cabeçalho (muito mais difícil do que em um arquivo .cpp)! O resultado final deve ser tudo o que você nos pediu.
Qualidade C ++ e tratamento de exceções
O outro foco de qualidade significativo para C ++ em 10.4.2 é em torno do tratamento de exceções.
O tratamento de exceções é uma área complexa, que requer interoperabilidade estreita entre o compilador e o RTL para funcionar corretamente. Existem convenções comuns para exceções, como nunca permitir que uma exceção ultrapasse o limite de um módulo (por exemplo, ser lançada em uma DLL, mas capturada em um EXE), mas nem sempre são seguidas, às vezes por bons motivos. Em C ++, precisamos lidar com exceções C ++, exceções do sistema operacional e SEH, não esquecendo o tratamento de exceções do Delphi também.
Em 10.4.2, revisamos o sistema de tratamento de exceções. À medida que nos aproximamos do lançamento, esperamos uma postagem no blog detalhando os cenários que oferecemos. Internamente, estamos vendo algumas grandes melhorias atualmente!
IDE
Além do foco no autocompletar de código para ambas as linguagens, o IDE tem algum outro trabalho planejado para 10.4.2.
No 10.3, introduzimos os dois temas atuais, um estilo claro e escuro (na verdade, um estilo escuro, embora significativamente diferente, foi introduzido pela primeira vez em 10.2.3. Era um de nossos recursos mais populares.) O estilo claro é predominantemente azul pálido. Nesta versão, estamos adicionando um terceiro estilo, que usa cinza tradicional e não azul para as cores principais. Considere um estilo retro para quem gosta de como o IDE parecia na era 2010-XE7, um retorno ao visual clássico. Também acreditamos que pode ser útil para aqueles que precisam de acomodações especiais para a visão.
Também planejamos mais trabalho de qualidade em torno de layouts de desktop, layouts de vários monitores, design de formulários e áreas semelhantes. Isso inclui permitir que você projete seu formulário no designer de formulário ao mesmo tempo em que abre o código desse formulário. De acordo com o feedback, este foi o motivo mais comum para usar o antigo designer desencaixado, que removemos em 10.4.1, e estamos felizes em permitir que você codifique e projete em uma unidade de formulário usando o designer moderno.
Finalmente, queremos aprimorar a ferramenta de migração de configurações, para ajudar a mover as configurações do RAD Studio de uma versão para outra (como de 10.3 para 10.4) e para melhor preservar sua configuração ao mudar para uma versão de atualização (como de 10.4.1 para 10.4.2 ) Planejamos adicionar configurações predefinidas específicas para cada cenário e incluir arquivos de configuração, além das configurações de registro consideradas hoje.
Comentário de Marco sobre 10.4.2 Planos
Em termos de sistemas operacionais de destino, estamos atualmente focados em melhorar as plataformas existentes suportadas em 10.4.x, e há duas áreas de foco nas quais estamos trabalhando para 10.4.2.
O primeiro é parte de nosso foco contínuo no Windows como nosso alvo principal. No sistema operacional da Microsoft, estamos seguindo de perto a direção atual da Microsoft de unificar a API WinRT e a API Win tradicional, por meio do Project Reunion. O Project Reunion (https://github.com/microsoft/ProjectReunion) compreende diferentes tecnologias, sendo as iniciais WinUI 3, WebView2 e MSIX.
O controle WebView2 é um novo componente da plataforma Windows que incorpora o Edge Chromium. Fornecemos suporte para esse recurso na versão 10.4 com o componente TEdgeBrowser VCL.
O segundo bloco de construção será o suporte para o formato de pacote MSIX que planejamos para 10.4.2. MSIX é o sucessor do APPX, um destino que oferecemos atualmente como parte da integração do RAD Studio IDE Desktop Bridge, e se destina à Microsoft Store e à implantação empresarial.
Outra área de nosso suporte aprimorado para a plataforma Windows, e em particular para aplicativos VCL, é a adição de dois novos controles que visam ajudar nossos clientes a modernizar e melhorar a UX de seus aplicativos. Estamos trabalhando em novos controles nativos do Windows VCL, para que você possa fornecer uma interface de usuário mais moderna para seus clientes:
Um deles é uma visualização de lista virtual altamente otimizada, que permitirá a você exibir um grande número de itens com uma combinação flexível de texto e gráficos. O controle será vagamente baseado na abordagem do controle DBCtrlGrid existente, mas sem exigir uma fonte de dados. Ele suportará o uso de Live Bindings.
O outro novo componente da VCL que estamos adicionando é um controle de entrada numérica, semelhante ao controle da plataforma WinUI NumberBox. Este controle fornece uma entrada numérica mais fácil e suave, levando em consideração diferentes formatos (inteiros, números de ponto flutuante, valores monetários) e também incluindo a avaliação de expressões simples.
A segunda parte de nossas melhorias concentra-se nas plataformas de destino atualmente suportadas. Planejamos oferecer compatibilidade total com novas versões de sistemas operacionais lançados pela Apple e Google após o lançamento do RAD Studio 10.4.1. Embora você possa direcionar essas plataformas hoje, existem alguns problemas em aberto que queremos abordar adequadamente (em vez de por meio de soluções alternativas). O objetivo é ter suporte completo para:
iOS 14 e iPadOS 14 (Delphi e C ++)
macOS 11.0 Big Sur (Intel) (Delphi)
Android 11 (Delphi)
Em termos de qualidade, continuaremos com um esforço significativo de estabilidade, desempenho e qualidade em 10.4.2 (como fizemos em 10.4.1). Planejamos resolver os problemas relatados pelos clientes e dar suporte ao escalonamento em muitas áreas do produto. As ferramentas e bibliotecas nas quais iremos nos concentrar particularmente, além das listadas por David anteriormente, incluem:
O compilador Delphi (para todas as plataformas) para melhorar sua robustez e compatibilidade com versões anteriores, mas um foco particular e profundo no desempenho do compilador (e do vinculador) para reduzir o tempo de compilação para grandes projetos e também acelerar o mecanismo LSP (que usa o compilador para analisar o código-fonte).
A biblioteca do cliente SOAP junto com a ferramenta de importação WSDL que gera o código do cliente usado para fazer interface com os servidores SOAP
A Biblioteca de Programação Paralela (PPL), que oferece uma grande abstração das diferentes plataformas e capacidades de threading de CPUs multi-core, em termos de tarefas, futuros e loops paralelos
As soluções de serviço da web multicamadas parte do RAD Studio, com melhorias no servidor RAD e no mecanismo DataSnap mais antigo, e melhorias gerais nas bibliotecas de cliente HTTP e REST. Também continuaremos nos concentrando em nosso suporte à nuvem Azure e AWS.
Estilos VCL e estilos HighDPI vão receber atenção extra, junto com VCL em geral
Para a biblioteca FireMonkey, continuamos aprimorando os componentes do TMemo (na plataforma e nas versões estilizadas), o driver de biblioteca Metal GPU introduzido no 10.4 e retrabalho o gerenciamento do sensor no Android, para oferecer melhor suporte para uma variedade de dispositivos Android.
RAD Studio 10.5
For reference, here is the main roadmap slide again:
David's Commentary on the 10.5 Plans
User Experience
We have a number of great new features, anticipated by many customers, planned for 10.5.
First of all, we plan full High DPI support in the IDE. The VCL has supported High DPI for a couple of releases now, and the RAD Studio IDE, which mostly uses the VCL, will now support High DPI as well. This ensures it will render crisply on all modern high-resolution screens, including as you move windows across screens with different resolutions and scaling.
The VCL form designer is one of the key tools you use when building your application. The point of the designer is to quickly build a UI while seeing closely how it will look when your app runs, in contrast to UI tools that describe a UI only in text and provide no immediate feedback/iteration loop. In 10.5, we plan to extend that element of looking similar to how your app appears when it runs by adding VCL styles support to the designer, so that when any of your controls are styled, you will see them styled in the designer as well.
The FMX form designer is similarly a key tool when you build a cross-platform application. We plan to bring over some of the design tooling the VCL designer has, such as alignment guides, to make sure the designer has the productivity features you need.
We also plan to focus on the IDE's source control integration, to aid your team's collaboration. In addition, we plan some improvements to how the IDE presents itself when first run, to help those who are new to Delphi and C++Builder get started.
Finally, many customers use Delphi or C++Builder on a dedicated build server. Along with source control, testing, and similar practices, it is good practice to have official builds made on a specific machine or VM. Currently, to install RAD for a build server you need to install the full IDE, but this should not be necessary, because building only needs the command-line tools. We plan an installation scenario specifically for build servers.
C++Builder
In 10.4.0, we introduced a new debugger for C++ Win64. This addressed a common customer request, especially since we included "formatters", a way to easily evaluate the contents of STL containers or any data structure, including your own. This was a completely new debugger, not a new version of what we used before. In 10.5, we plan a similar brand-new replacement for another key tool: the linker. Like the debugger, it will be for Win64.
You will notice a focus on 64-bit Windows here. Many customers are using Clang to target Win64, and we want to make sure our tools are on par with or better than the tools you may be used to from the classic compiler. In addition, many people are starting to look exclusively at 64-bit, with 32-bit apps being upgraded and new apps being 64-bit only.
Visual Assist is an amazing productivity extension for Visual C++, providing code completion, refactorings, and more. We have been researching various ways to integrate it into C++Builder, and we plan to do so in 10.5.
Finally, we also plan to improve Delphi/C++ interoperability. Being able to use two languages is a great productivity boost and one of the key reasons to use C++Builder or RAD Studio, and this is work to polish that integration. It should provide smoother integration with RTL features.
Delphi Debugger
In 10.4, we introduced an entirely new debugger for C++ Win64 (mentioned above) based on LLDB. Ultimately, we aim to use the same debugger on all platforms; today we use a mix of different debuggers. The key to this is adding a Delphi language frontend to LLDB, which lets you evaluate Delphi syntax in, for example, the Evaluate/Modify dialog. We plan to introduce the first platform using LLDB with this new frontend in 10.5.
Marco's Commentary on the 10.5 Plans
Platforms
Regarding the Windows platform, as mentioned earlier, we plan to offer support for the various technologies that are part of Microsoft Project Reunion. In particular, in the RAD Studio 10.5 release we look forward to integrating support for modern Windows UX via the WinUI 3 library. According to Microsoft's roadmap for the library, it should be possible to use its components in a native application based on the classic API, mixing forms and controls of different kinds. The actual details will depend on what the library provides in terms of integration with native applications, but our current plan is to integrate this library into the VCL with specific new controls.
Speaking of platforms, we want to add a new target for Delphi applications: a new compiler for the ARM-based version of the macOS operating system on Apple hardware powered by Apple Silicon CPUs. While you can run Intel applications, the goal is to have a native ARM application for the new generation of Macs.
This will be a significant extension of Delphi, including a new compiler and updates to the runtime library and to the various high-level libraries. We also plan to expand the Delphi language syntax for all platforms and to improve the performance of the math-processing code the compiler generates on Windows, making applications faster at numeric processing.
We will also continue to work on overall product quality, and we plan to pick a few subsystems to focus on, a decision we will make by assessing customer feedback on the current version and the coming updates.
Summary
We have great plans for the coming releases of Delphi, C++Builder, and RAD Studio! From exciting changes to code completion for both languages, to a High DPI IDE, productivity improvements when coding and designing, Windows UI and new VCL components, Delphi debugging, Apple Silicon (M1) support for Delphi, quality work for the compilers, the Delphi and C++ RTL, SOAP, multi-tier, and more, plus a new linker for C++: the coming releases contain some really exciting work. We can't wait to get them to you!
Note: These plans and roadmap represent our intentions as of this date, but our development plans and priorities are subject to change. Accordingly, we can make no commitments or other forms of assurance that we will ultimately release any or all of the described products on the schedule or in the order described, or at all. These general indications of development schedules or "product roadmaps" should not be interpreted or construed as any form of a commitment, and our customers' rights to upgrades, updates, enhancements, and other maintenance releases will be set forth only in the applicable software license agreement.
The Embarcadero RAD Studio product management team regularly updates the product development roadmap for Delphi, C++Builder, and RAD Studio. As you can see in our official roadmap blog post, we have just released a new version of the roadmap, covering the key features we have planned for the next 12 months. Along with the official roadmap slides, we wanted to offer more detail, information, and insight with this additional blog post. You may find it helpful to open the slides for reference as you read the expanded information we provide here.
In our roadmap, you can find the key features we have planned for the 2021 calendar year. Before we get to the details of our updated roadmap, we want to recap what we have delivered so far.
Earlier this year, we released 10.4 Sydney. The 10.4 release was very well received by our customers and included the first installment of our complete rewrite of the Delphi Code Insight engine, now based on the Language Server Protocol architecture; a new debugger for C++ Win64, which addressed a long-standing customer request; and new Delphi language features, such as custom managed records. We also significantly expanded the use cases for VCL styles, with support for HighDPI monitors and per-control styling.
We followed up the 10.4 Sydney release with 10.4.1 in September 2020, which focused primarily on quality and on further improvements to the features delivered in 10.4, in particular the newly added Delphi LSP support. 10.4.1 includes over 800 quality improvements, including more than 500 for publicly reported issues in the Embarcadero Quality Portal.
Before getting into the details of the 10.4.2 and 10.5 releases, we wanted to highlight that 10.4/10.4.1 has been one of our most popular releases to date, with more downloads compared to 10.3 and 10.2. This is especially impressive amid COVID-19. We attribute the success both to the release itself and to our increased engagement with technology partners, who continue to collaborate with our team to update and develop new features. We also want to thank you, our customers, for the great feedback you have provided to the product team, in terms of features you would like to see added and areas where you would like us to focus on quality, and for your participation in our beta programs (which is one of the benefits of being on subscription).
We are currently working on 10.4.2, planned for the first part of calendar year 2021 and detailed in the roadmap and in this commentary blog post. At some point before the release, we expect to invite RAD Studio, Delphi, and C++Builder customers with an active Update Subscription to join the beta testing for the upcoming release. The 10.4.2 beta will be an NDA beta, requiring participants to sign our non-disclosure agreement before they can take part. Being able to join the betas and provide feedback to product management early in the development cycle is one of the benefits of staying current on Update Subscription.
For 10.5, we plan to introduce a new target platform for Delphi, macOS ARM (based on Apple Silicon CPUs), significant work around IDE HighDPI support, C++ toolchain extensions, and many other additional features and quality improvements. See below for more details.
RAD Studio Roadmap Timeline
Before we get to the details of the features the development team is working on today or researching for the future, let's take a look at the timeline of the coming releases as shown on the main roadmap slide:
David's Commentary on the 10.4.2 Plans
Delphi Code Insight
As Marco wrote above, in 10.4.1 we focused heavily on quality. That continues in 10.4.2, with one significant area of work being the new Delphi Code Insight (also known as Delphi LSP). Not only will 10.4.2 include many fixes and tweaks, as well as less commonly used Code Insight features that we did not include in the initial release, but also some new or significantly improved code completion features. For example, we plan to add Ctrl-Click on "inherited", as well as revised completion in "uses" clauses, among others.
C++ Code Insight
C++ also continues the theme of ongoing quality work, with a focus on two areas.
Most noticeable for all C++ customers will be a complete overhaul of C++ code completion when using the Clang compilers. In 10.3, when we upgraded to C++17, we had to replace the code completion technology the IDE used. Since its introduction, we have improved code completion in each release, addressing use cases where specific code patterns or project configurations could cause completion problems.
For 10.4.2, we decided to undertake a complete overhaul of code completion for C++, to provide the developer productivity our customers are looking for. C++ customers should find that code completion and navigation work well and reliably.
We have even addressed some harder cases, such as providing completion in a header file (much harder than in a .cpp file). The end result should be everything you have asked us for.
C++ Exception Handling and Quality
The other significant quality focus for C++ in 10.4.2 is exception handling.
Exception handling is a complex area that requires close interoperability between the compiler and the RTL to work correctly. There are common conventions for exceptions, such as never letting an exception cross a module boundary (for example, being thrown in a DLL but caught in an EXE), but these are not always followed, sometimes for good reasons. In C++, we need to handle C++ exceptions and OS and SEH exceptions, not to mention Delphi exception handling as well.
In 10.4.2, we have overhauled the exception handling system. As we get closer to the release, expect a blog post detailing the scenarios we support. Internally, we are seeing some great improvements already!
IDE
In addition to the focus on code completion for both languages, the IDE has some other work planned for 10.4.2.
In 10.3, we introduced the current two themes, a light and a dark style (in fact, a dark style, though a significantly different one, was first introduced in 10.2.3; it was one of our most popular features). The light style is predominantly pale blue. In this release, we are adding a third style, one that uses traditional grey, not blue, for the main colors. Consider it a retro style for those who like how the IDE looked in the 2010-XE7 era, a callback to the classic look. We also believe it may be useful for those who require special vision accommodations.
We also plan additional quality work around desktop layouts, multi-monitor layouts, form designing, and similar areas. This includes letting you design your form in the form designer while also having that form's code open. Based on feedback, this was the most common reason for using the old undocked designer, which we removed in 10.4.1, and we are glad to let you code and design a form unit using the modern designer.
Finally, we want to improve the settings migration tool, to help move RAD Studio configurations from one version to another (such as from 10.3 to 10.4) and to better preserve your settings when moving to an update release (such as from 10.4.1 to 10.4.2). We plan to add preset configurations for each scenario and to include configuration files in addition to the registry settings handled today.
Marco's Commentary on the 10.4.2 Plans
In terms of target operating systems, we are currently focused on improving the existing platforms supported in 10.4.x, and there are two focus areas we are working on for 10.4.2.
The first is part of our continued focus on Windows as our primary target. On Microsoft's operating system, we are closely following Microsoft's current direction of unifying the WinRT API and the traditional Win API through Project Reunion. Project Reunion (https://github.com/microsoft/ProjectReunion) comprises different technologies, the initial ones being WinUI 3, WebView2, and MSIX.
The WebView2 control is a new Windows platform component that embeds Edge Chromium. We provided support for this feature in 10.4 with the TEdgeBrowser VCL component.
The second building block will be support for the MSIX packaging format, which we are planning for 10.4.2. MSIX is the successor of APPX, a target we currently offer as part of the RAD Studio IDE Desktop Bridge integration, and it is designed for the Microsoft Store and for enterprise deployment.
Another area of our improved support for the Windows platform, and in particular for VCL applications, is the addition of two new controls aimed at helping our customers modernize and improve the UX of their applications. We are working on new native Windows VCL controls, so you can provide a more modern user interface to your customers:
One is a highly optimized virtual list view, which will let you display a large number of items with a flexible combination of text and graphics. The control will be loosely based on the approach of the existing DBCtrlGrid control, but without requiring a data source. It will support the use of LiveBindings.
The other new VCL component we are adding is a numeric input control, similar to the WinUI NumberBox platform control. This control provides easier and smoother numeric input, taking into account different formats (integers, floating-point numbers, currency values), and also includes simple expression evaluation.
This blog post includes the slides of the latest RAD Studio roadmap, including plans for Delphi and C++Builder. You can also read the related RAD Studio November 2020 Roadmap PM Commentary blog post for a more detailed description of the planned features.
Hello, and welcome to the last episode of this Flutter series! 👋
In the previous episodes, we looked at some basic Dart and Flutter concepts ranging from data structures and types, OOP and asynchrony to widgets, layouts, states, and props.
Alongside this course, I promised you (several times) that we’d build a fun mini-game in the last episode of this series - and the time has come.
The game we’ll build: ShapeBlinder
The name of the project is shapeblinder.
Just a little fun fact: I already built this project in PowerPoint and Unity a few years ago. 😎 If you've read my previous, React-Native focused series, you may have noticed that the name is quite similar to the name of the project in that one (colorblinder), and that's no coincidence: this project is a somewhat similar minigame, and it's the next episode of that casual game series.
We always talk about how some people just have a natural affinity for coding, or how some people come to feel the code after a while. While a series can't get you to that level, we can write some code that we can physically feel when it's working, so that's what we'll be aiming for.
The concept of this game is that there is a shape hidden on the screen. Tapping the hidden shape will trigger a gentle haptic feedback on iPhones and a basic vibration on Android devices. Based on where you feel the shape, you’ll be able to guess which one of the three possible shapes is hidden on the screen.
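Flutter exposes this kind of feedback through its services library. As a minimal sketch (the `onShapeTouched` wiring is hypothetical, but `HapticFeedback` is the standard Flutter API for this), triggering the feedback described above could look like:

```dart
import 'package:flutter/services.dart';

// Illustrative sketch: fire a gentle haptic tick when the user touches
// the hidden shape. On iPhones this uses the Taptic Engine; on Android
// devices it falls back to a short vibration.
void onShapeTouched() {
  HapticFeedback.lightImpact();
}
```

`HapticFeedback` also offers `mediumImpact`, `heavyImpact`, and `vibrate` if a stronger sensation fits better.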
Before getting to code, I created a basic design for the project. I kept the feature set, the distractions on the UI, and the overall feeling of the app as simple and chic as possible. This means no colorful stuff, no flashy stuff, some gentle animations, no in-app purchases, no ads, and no tracking.
We’ll have a home screen, a game screen and a “you lost” screen. A title-subtitle group will be animated across these screens. Tapping anywhere on the home screen will start the game, and tapping anywhere on the lost screen will restart it. We’ll also have some data persistence for storing the user’s high scores.
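For the high-score persistence, one common approach is the shared_preferences package (an assumption on my part here; the key name "highScore" and the helper functions are illustrative, not the article's actual code):

```dart
import 'package:shared_preferences/shared_preferences.dart';

// Sketch of simple high-score persistence with shared_preferences.
// Only stores the new score if it beats the previous best.
Future<void> saveHighScore(int score) async {
  final prefs = await SharedPreferences.getInstance();
  final int current = prefs.getInt("highScore") ?? 0;
  if (score > current) {
    await prefs.setInt("highScore", score);
  }
}

Future<int> loadHighScore() async {
  final prefs = await SharedPreferences.getInstance();
  return prefs.getInt("highScore") ?? 0;
}
```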
The full source code is available on GitHub. You can download the built application from both Google Play and the App Store.
Now go play around with the game, and after that, we’ll get started! ✨
Initializing the project
First and foremost, I used the previously discussed flutter create shapeblinder CLI command. Then, I deleted most of the code and created my usual go-to project structure for Flutter:
Inside lib, I usually create a core and a ui directory to separate the business logic from the UI code. Inside the ui directory, I also add a screens and a widgets directory. I like keeping these well separated - however, this is just my own preference!
Feel free to experiment with other project structures on your own and see which one is the one you naturally click with. (The most popular project structures you may want to consider are MVC, MVVM, or BLoC, but the possibilities are basically endless!)
After setting up the folder structure, I usually set up the routing with some very basic empty screens. To achieve this, I created a few dummy screens inside the lib/ui/screens/.... A simple centered text widget with the name of the screen will do for now:
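The snippet isn't shown here, but based on the description below (a StatelessWidget with a Scaffold, and a Text wrapped with a Center), a dummy screen like this will do:

// lib/ui/screens/Home.dart
import 'package:flutter/material.dart';

class Home extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return Scaffold(
      body: Center(
        child: Text("Home"),
      ),
    );
  }
}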
Notice that I only used classes, methods, and widgets that we previously discussed. Just a basic StatelessWidget with a Scaffold so that our app has a body, and a Text wrapped with a Center. Nothing heavy there. I copied and pasted this code into the Game.dart and Lost.dart files too, so that I can set up the routing in the main.dart:
// lib/main.dart
import 'package:flutter/material.dart';

// import the screens we created in the previous step
import './ui/screens/Home.dart';
import './ui/screens/Game.dart';
import './ui/screens/Lost.dart';

// the entry point to our app
void main() {
  runApp(Shapeblinder());
}

class Shapeblinder extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      title: 'ShapeBlinder',
      // define the theme data
      // i only added the fontFamily to the default theme
      theme: ThemeData(
        primarySwatch: Colors.grey,
        visualDensity: VisualDensity.adaptivePlatformDensity,
        fontFamily: "Muli",
      ),
      home: Home(),
      // add in the routes
      // we'll be able to use them later in the Navigator.pushNamed method
      routes: <String, WidgetBuilder>{
        '/home': (BuildContext context) => Home(),
        '/game': (BuildContext context) => Game(),
        '/lost': (BuildContext context) => Lost(),
      },
    );
  }
}
Make sure that you read the code comments for short inline explanations! Since we already discussed these topics, I don’t want to spend too much time explaining these concepts from the ground up - we’re just putting them into practice to see how they work before you get your hands dirty with real-life projects.
Adding assets, setting up the font
You may have noticed that I threw in a fontFamily: “Muli” in the theme data. How do we add this font to our project? There are several ways: you could, for example, use the Google Fonts package, or manually add the font file to the project. While using the package may be handy for some, I prefer bundling the fonts together with the app, so we’ll add them manually.
The first step is to acquire the font files: in Flutter, .ttf is the preferred format. You can grab the Muli font this project uses from Google Fonts here.
(Update: the font has been removed from Google Fonts. You’ll be able to download it soon bundled together with other assets such as the app icon and the svgs, or you could also use a new, almost identical font by the very same author, Mulish).
Then, move the files somewhere inside your project. The assets/fonts directory is a perfect place for your font files - create it, move the files there and register the fonts in the pubspec.yaml:
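The registration lives under the flutter section of the pubspec.yaml and could look something like this (a sketch - the exact .ttf file names are assumptions, so double-check them against the files you downloaded):

flutter:
  fonts:
    - family: Muli
      fonts:
        # file names below are assumptions - match them to your downloads
        - asset: assets/fonts/Muli.ttf
        - asset: assets/fonts/Muli-Italic.ttf
          style: italic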
You can see that we were able to add the normal and italic versions in a single family: because of this, we won’t need to use altered font names (like “Muli-Italic”). After this - boom! You’re done. 💥 Since we previously specified the font in the app-level theme, we won’t need to refer to it anywhere else - every rendered text will use Muli from now on.
Now, let’s add some additional assets and the app icon. We’ll have some basic shapes as SVGs that we’ll display on the bottom bar of the Game screen. You can grab every asset (including the app icon, font files, and svgs) from here. You can just unzip this and move it into the root of your project and expect everything to be fine.
Before being able to use your svgs in the app, you need to register them in the pubspec.yaml, just like you had to register the fonts:
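Registering a whole directory is enough; something like this under the flutter section (assuming the svgs from the asset pack live in assets/svg/):

flutter:
  assets:
    - assets/svg/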
And finally, to set up the launcher icon (the icon that shows up in the system UI), we’ll use a handy third-party package flutter_launcher_icons. Just add this package into the dev_dependencies below the normal deps in the pubspec.yaml:
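For example (using the same loose "any" version constraint the article uses elsewhere):

dev_dependencies:
  flutter_test:
    sdk: flutter
  flutter_launcher_icons: any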
...and then configure it, either in the pubspec.yaml or by creating a flutter_launcher_icons.yaml config file. A very basic configuration is going to be just enough for now:
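A sketch of such a configuration - the icon path is an assumption, so point it at wherever your icon file actually lives:

flutter_icons:
  android: true
  ios: true
  # path below is an assumption - adjust to your project
  image_path: "assets/icon/icon.png"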
And then, you can just run the following commands, and the script will set up the launcher icons for both Android and iOS:
flutter pub get
flutter pub run flutter_launcher_icons:main
After installing the app either on a simulator, emulator, or a connected real-world device with flutter run, you’ll see that the app icon and the font family are set.
You can use a small r in the CLI to reload the app and keep its state, and use a capital R to restart the application and drop its state. (This is needed when big changes are made in the structure. For example, a StatelessWidget gets converted into a stateful one; or when adding new dependencies and assets into your project.)
Building the home screen
Before jumping right into coding, I always like to take my time and plan out how I’ll build that specific screen based on the screen designs. Let’s have another, closer look at the designs I made before writing them codez:
We can notice several things that will affect the project structure:
The Home and the Lost screens look nearly identical to each other
All three screens have a shared Logo component with a title (shapeblinder / you lost) and a custom subtitle
So, let’s break down the Home and Lost screens a bit:
The first thing we’ll notice is that we’ll need to use a Column for the layout. (We may also think about the main and cross axis alignments - they are center and start, respectively. If you wouldn’t have figured this out on your own, don’t worry - you’ll slowly develop a feeling for it. Until then, you can always experiment with all the options you have until you find the one that fits.)
After that, we can notice the shared Logo or Title component and the shared Tap component. Also, the Tap component says “tap anywhere [on the screen] to start (again)”. To achieve this, we’ll wrap our layout in a GestureDetector so that the whole screen can respond to taps.
Let’s hit up Home.dart and start implementing our findings. First, we set the background color in the Scaffold to black:
return Scaffold(
backgroundColor: Colors.black,
And then, we can just go on and create the layout in the body. As I already mentioned, I’ll first wrap the whole body in a GestureDetector. This is an important step because later on, we can simply add an onTap property to navigate the user to the next screen.
Inside the GestureDetector, however, I still won’t be adding the Column widget. First, I’ll wrap it in a SafeArea widget. SafeArea is a handy widget that adds additional padding to the UI if needed because of the hardware (for example, because of a notch, a swipeable bottom bar, or a camera cut-out). Then, inside that, I’ll also add in a Padding so that the UI can breathe, and inside that, will live our Column. The widget structure looks like this so far:
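So, in outline, the nesting described above is:

GestureDetector
  └── SafeArea
      └── Padding
          └── Column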
Oh, and by the way, just to flex with the awesome tooling of Flutter - you can always have a peek at how your widget structure looks in the VS Code sidebar:
And this is how our code looks right now:
import 'package:flutter/material.dart';

class Home extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return Scaffold(
      backgroundColor: Colors.black,
      body: GestureDetector(
        // tapping on empty spaces would not trigger the onTap without this
        behavior: HitTestBehavior.opaque,
        onTap: () {
          // navigate to the game screen
        },
        // SafeArea adds padding for device-specific reasons
        // (e.g. bottom draggable bar on some iPhones, etc.)
        child: SafeArea(
          child: Padding(
            padding: const EdgeInsets.all(40.0),
            child: Column(
              mainAxisAlignment: MainAxisAlignment.center,
              crossAxisAlignment: CrossAxisAlignment.start,
              children: <Widget>[],
            ),
          ),
        ),
      ),
    );
  }
}
Creating a Layout template
And now, we have a nice frame or template for our screen. We’ll use the same template on all three screens of the app (excluding the Game screen where we won’t include a GestureDetector), and in cases like this, I always like to create a nice template widget for my screens. I’ll call this widget Layout now:
// lib/ui/widgets/Layout.dart
import 'package:flutter/material.dart';

class Layout extends StatelessWidget {
  // passing named parameters with the ({}) syntax
  // the type is automatically inferred from the type of the variable
  // (in this case, the children prop will have a type of List<Widget>)
  Layout({this.children});

  final List<Widget> children;

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      backgroundColor: Colors.black,
      // SafeArea adds padding for device-specific reasons
      // (e.g. bottom draggable bar on some iPhones, etc.)
      body: SafeArea(
        child: Padding(
          padding: const EdgeInsets.all(40.0),
          child: Column(
            mainAxisAlignment: MainAxisAlignment.center,
            crossAxisAlignment: CrossAxisAlignment.start,
            children: children,
          ),
        ),
      ),
    );
  }
}
Now, in the Home.dart, we can just import this layout and wrap it in a GestureDetector, and we’ll have the very same result that we had previously, but we saved tons of lines of code because we can reuse this template on all other screens:
import 'package:flutter/material.dart';
import 'package:flutter/services.dart';

import "../widgets/Layout.dart";

class Home extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return GestureDetector(
      // tapping on empty spaces would not trigger the onTap without this
      behavior: HitTestBehavior.opaque,
      onTap: () {
        // navigate to the game screen
      },
      child: Layout(
        children: <Widget>[],
      ),
    );
  }
}
Oh, and remember this because it’s a nice rule of thumb: whenever you find yourself copying and pasting code from one widget to another, it’s time to extract that snippet into a separate widget. It really helps to keep spaghetti code away from your projects. 🍝
Now that the overall wrapper and the GestureDetector is done, there are only a few things left on this screen:
Implementing the navigation in the onTap prop
Building the Logo widget (with the title and subtitle)
Building the Tap widget (with that circle-ey svg, title, and subtitle)
Implementing navigation
Inside the GestureDetector, we already have an onTap property set up, but the method itself is empty as of now. To get started with it, we should just throw in a console.log, or, as we say in Dart, a print statement to see if it responds to our taps.
onTap: () {
  // navigate to the game screen
  print("hi!");
},
Now, if you run this code with flutter run, anytime you’ll tap the screen, you’ll see “hi!” being printed out into the console. (You’ll see it in the CLI.)
That’s amazing! Now, let’s move forward to throwing in the navigation-related code. We already looked at navigation in the previous episode, and we already configured named routes in a previous step inside the main.dart, so we’ll have a relatively easy job now:
onTap: () {
  // navigate to the game screen
  Navigator.pushNamed(context, "/game");
},
And boom, that’s it! Tapping anywhere on the screen will navigate the user to the game screen. However, because both screens are empty, you won’t really notice anything - so let’s build the two missing widgets!
Building the Logo widget, Hero animation with text in Flutter
Let’s have another look at the Logo and the Tap widgets before we implement them:
We’ll start with the Logo widget because it’s easier to implement. First, we create an empty StatelessWidget:
// lib/ui/widgets/Logo.dart
import "package:flutter/material.dart";
class Logo extends StatelessWidget {
}
Then we define two properties, title and subtitle, with the method we already looked at in the Layout widget:
import "package:flutter/material.dart";

class Logo extends StatelessWidget {
  Logo({this.title, this.subtitle});

  final String title;
  final String subtitle;

  @override
  Widget build(BuildContext context) {
  }
}
And now, we can just return a Column from the build method because we want to render the two text widgets underneath each other.
And notice how we were able to just use title and subtitle even though they are properties of the widget. We’ll also add in some text styling, and we’ll be done for now - with the main body.
return Column(
  crossAxisAlignment: CrossAxisAlignment.start,
  children: <Widget>[
    Text(
      title,
      style: TextStyle(
        fontWeight: FontWeight.bold,
        fontSize: 34.0,
        color: Colors.white,
      ),
    ),
    Text(
      subtitle,
      style: TextStyle(
        fontSize: 24.0,
        // The Color.xy[n] gets a specific shade of the color
        color: Colors.grey[600],
        fontStyle: FontStyle.italic,
      ),
    ),
  ],
);
Now, this is all well and good, and it matches what we wanted to accomplish - however, this widget could really use a nice finishing touch. Since this widget is shared between all of the screens, we could add a really cool Hero animation. The Hero animation is somewhat like the Magic Move in Keynote. Go ahead and watch this short Widget of The Week episode to learn what a Hero animation is and how it works:
This is very cool, isn’t it? We’d imagine that just wrapping our Logo component in a Hero and passing a tag would be enough, and we’d be right, but the Text widget’s styling is a bit odd in this case. First, we should wrap the Column in a Hero and pass in a tag like the video said:
return Hero(
  tag: "title",
  transitionOnUserGestures: true,
  child: Column(
    crossAxisAlignment: CrossAxisAlignment.start,
    children: <Widget>[
      Text(
        title,
        style: TextStyle(
          fontWeight: FontWeight.bold,
          fontSize: 34.0,
          color: Colors.white,
        ),
      ),
      Text(
        subtitle,
        style: TextStyle(
          fontSize: 24.0,
          // The Color.xy[n] gets a specific shade of the color
          color: Colors.grey[600],
          fontStyle: FontStyle.italic,
        ),
      ),
    ],
  ),
);
But when the animation is happening, and the widgets are moving around, you’ll see that Flutter drops the font family and the Text overflows its container. So we’ll need to hack around Flutter with some additional components and theming data to make things work:
import "package:flutter/material.dart";

class Logo extends StatelessWidget {
  Logo({this.title, this.subtitle});

  final String title;
  final String subtitle;

  @override
  Widget build(BuildContext context) {
    return Hero(
      tag: "title",
      transitionOnUserGestures: true,
      child: Material(
        type: MaterialType.transparency,
        child: Container(
          width: MediaQuery.of(context).size.width,
          child: Column(
            crossAxisAlignment: CrossAxisAlignment.start,
            children: <Widget>[
              Text(
                title,
                style: TextStyle(
                  fontWeight: FontWeight.bold,
                  fontSize: 34.0,
                  color: Colors.white,
                ),
              ),
              Text(
                subtitle,
                style: TextStyle(
                  fontSize: 24.0,
                  // The Color.xy[n] gets a specific shade of the color
                  color: Colors.grey[600],
                  fontStyle: FontStyle.italic,
                ),
              ),
            ],
          ),
        ),
      ),
    );
  }
}
This code will ensure that the text has enough space even if the content changes between screens (which will of course happen), and that the font style doesn’t randomly change while in-flight (or while the animation is happening).
Now, we’re finished with the Logo component, and it will work and animate perfectly and seamlessly between screens.
Building the Tap widget, rendering SVGs in Flutter
The Tap widget will render an SVG, a text from the props, and the high score from the stored state underneath each other. We could start by creating a new widget in the lib/ui/widgets directory. However, we’ll come to a dead-end after writing a few lines of code as Flutter doesn’t have native SVG rendering capabilities. Since we want to stick with SVGs instead of rendering them into PNGs, we’ll have to use a 3rd party package, flutter_svg.
To install it, we just simply add it to the pubspec.yaml into the dependencies:
dependencies:
  flutter:
    sdk: flutter
  cupertino_icons: ^0.1.3
  flutter_svg: any
And after saving the file, VS Code will automatically run flutter pub get and thus install the dependencies for you. Another great example of the powerful Flutter developer tooling! 🧙
Now, we can just create a file under lib/ui/widgets/Tap.dart, import this dependency, and expect things to be going fine. If you were already running an instance of flutter run, you’ll need to restart the CLI when adding new packages (by hitting Ctrl-C to stop the current instance and running flutter run again):
// lib/ui/widgets/Tap.dart
import "package:flutter/material.dart";
// import the dependency
import "package:flutter_svg/flutter_svg.dart";
We’ll just start out with a simple StatelessWidget now, but we’ll refactor this widget later, after we’ve implemented storing the high scores! Until then, we only need to think about the layout: it’s a Column because the children are underneath each other, but we wrap it in a Center so that it’s centered on the screen:
import "package:flutter/material.dart";
// import the dependency
import "package:flutter_svg/flutter_svg.dart";

class Tap extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return Center(
      child: Column(
        children: <Widget>[],
      ),
    );
  }
}
Now, you may be wondering: wouldn’t setting crossAxisAlignment: CrossAxisAlignment.center on the Column center its children? So why the Center widget?
The crossAxisAlignment only aligns children inside the parent’s bounds, but the Column doesn’t fill up the screen width. (You could, however, achieve this by using the Flexible widget, but that would have some unexpected side effects.)
On the other hand, Center aligns its children to the center of the screen. To understand why we need the Center widget and why setting crossAxisAlignment to center isn’t just enough, I made a little illustration:
Now that this is settled, we can define the properties of this widget:
Tap({this.title});
final String title;
And move on to building the layout. First comes the SVG - the flutter_svg package exposes an SvgPicture.asset method that will return a Widget and hence can be used in the widget tree, but that widget will always try to fill up its ancestor, so we need to restrict the size of it. We can use either a SizedBox or a Container for this purpose. It’s up to you:
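With a Container, that looks like this (the asset path assumes the svgs from the asset pack):

Container(
  height: 75,
  child: SvgPicture.asset(
    "assets/svg/tap.svg",
    semanticsLabel: 'tap icon',
  ),
),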
And we’ll just render the two other texts (the one that comes from the props and the best score) underneath each other, leaving us with this code:
import "package:flutter/material.dart";
// import the dependency
import "package:flutter_svg/flutter_svg.dart";

class Tap extends StatelessWidget {
  Tap({this.title});

  final String title;

  @override
  Widget build(BuildContext context) {
    return Center(
      child: Column(
        children: <Widget>[
          Container(
            height: 75,
            child: SvgPicture.asset(
              "assets/svg/tap.svg",
              semanticsLabel: 'tap icon',
            ),
          ),
          // give some space between the illustration and the text:
          Container(
            height: 14,
          ),
          Text(
            title,
            style: TextStyle(
              fontSize: 18.0,
              color: Colors.grey[600],
            ),
          ),
          Text(
            "best score: 0",
            style: TextStyle(
              fontSize: 18.0,
              color: Colors.grey[600],
              fontStyle: FontStyle.italic,
            ),
          ),
        ],
      ),
    );
  }
}
Always take your time examining the code examples provided, as you’ll soon start writing code just like this.
Putting it all together into the final Home screen
Now that both widgets are ready to be used on our Home and Lost screens, we should get back to the Home.dart and start putting them together into a cool screen.
First, we should import these classes we just made:
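Two lines like these, matching the paths from our project structure:

import "../widgets/Logo.dart";
import "../widgets/Tap.dart";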
And inside the Layout, we already have an empty children array - we should just fill it up with our new, shiny components:
import 'package:flutter/material.dart';
import 'package:flutter/services.dart';

import "../widgets/Layout.dart";
import "../widgets/Logo.dart";
import "../widgets/Tap.dart";

class Home extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return GestureDetector(
      // tapping on empty spaces would not trigger the onTap without this
      behavior: HitTestBehavior.opaque,
      onTap: () {
        // navigate to the game screen
        HapticFeedback.lightImpact();
        Navigator.pushNamed(context, "/game");
      },
      child: Layout(
        children: <Widget>[
          Logo(
            title: "shapeblinder",
            subtitle: "a game with the lights off",
          ),
          Tap(
            title: "tap anywhere to start",
          ),
        ],
      ),
    );
  }
}
And boom! After reloading the app, you’ll see that the new widgets are on-screen. There’s only one more thing left: the alignment is a bit off on this screen, and it doesn’t really match the design. Because of that, we’ll add in some Spacers.
In Flutter, a Spacer is your <div style={{ flex: 1 }}/>, except that they are not considered to be a weird practice here. Their sole purpose is to fill up every pixel of empty space on a screen, and we can also provide them a flex value if we want one Spacer to be larger than another.
In our case, this is exactly what we need: we’ll need one large spacer before the logo and a smaller one after the logo:
Spacer(
  flex: 2,
),
// add hero cross-screen animation for title
Logo(
  title: "shapeblinder",
  subtitle: "a game with the lights off",
),
Spacer(),
Tap(
  title: "tap anywhere to start",
),
And this will push everything into place.
Building the Lost screen, passing properties to screens in Flutter with Navigator
Because the layout of the Lost screen is nearly an exact copy of the Home screen, with just a few differences here and there, we’ll copy and paste the Home.dart into the Lost.dart and modify it like this:
class Lost extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return GestureDetector(
      behavior: HitTestBehavior.opaque,
      onTap: () {
        // navigate back to the game screen
        Navigator.pop(context);
      },
      child: Layout(
        children: <Widget>[
          Spacer(
            flex: 2,
          ),
          Logo(
            title: "you lost",
            subtitle: "score: 0",
          ),
          Spacer(),
          Tap(
            title: "tap anywhere to start again",
          ),
        ],
      ),
    );
  }
}
However, this just won’t be enough for us now. As you can see, there is a hard-coded “score: 0” on the screen. We want to pass the score as a prop to this screen, and display that value here.
To pass properties to a named route in Flutter, you should create an arguments class. In this case, we’ll name it LostScreenArguments. Because we only want to pass an integer (the points of the user), this class will be relatively simple:
// passing props to this screen with arguments
// you'll need to construct this class in the sender screen, too
// (in our case, the Game.dart)
class LostScreenArguments {
  final int points;

  LostScreenArguments(this.points);
}
And we can extract the arguments inside the build method:
@override
Widget build(BuildContext context) {
  // extract the arguments from the previously discussed class
  final LostScreenArguments args = ModalRoute.of(context).settings.arguments;
  // you'll be able to access it by: args.points
And just use the ${...} string interpolation method in the Text widget to display the score from the arguments:
Logo(
  title: "you lost",
  // string interpolation with the ${} syntax
  subtitle: "score: ${args.points}",
),
And boom, that’s all the code needed for receiving arguments on a screen! We’ll look into passing them later on when we are building the Game screen…
Building the underlying Game logic
...which we’ll start right now. So far, this is what we’ve built and what we haven’t implemented yet:
✅ Logo widget
✅ Hero animation
✅ Tap widget
✅ Rendering SVGs
✅ Home screen
✅ Lost screen
✅ Passing props
Underlying game logic
Game screen
Drawing shapes
Using haptic feedback
Storing high scores - persistent data
So there’s still a lot to learn! 🎓 First, we’ll build the underlying game logic and classes. Then, we’ll build the layout for the Game screen. After that, we’ll draw shapes on the screen that will be tappable. We’ll hook them into our logic, add in haptic feedback, and after that, we’ll just store and retrieve the high scores, test the game on a real device, and our game is going to be ready for production!
The underlying game logic will pick three random shapes for the user to show, and it will also pick one correct solution. To pass around this generated data, first, we’ll create a class named RoundData inside the lib/core/RoundUtilities.dart:
class RoundData {
  List<String> options;
  int correct;

  RoundData({this.options, this.correct});
}
Inside the assets/svg directory, we have some shapes lying around. We’ll store the names of the files in an array of strings so that we can pick random strings from this list:
// import these!!
import 'dart:core';
import 'dart:math';

class RoundData {
  List<String> options;
  int correct;

  RoundData({this.options, this.correct});
}

// watch out - new code below!
Random random = new Random();

// the names represent all the shapes in the assets/svg directory
final List<String> possible = [
  "circle",
  "cross",
  "donut",
  "line",
  "oval",
  "square",
];
And notice that I also created a new instance of the Random class and imported a few native Dart libraries. We can use this random variable to get new random numbers between two values:
// this will generate a new random int between 0 (inclusive) and 5 (exclusive)
random.nextInt(5);
The nextInt’s upper bound is exclusive, meaning that the code above can result in 0, 1, 2, 3, and 4, but not 5.
To get a random item from an array, we can combine the .length property with this random number generator method:
int randomItemIndex = random.nextInt(array.length);
Then, I’ll write a method that will return a RoundData instance:
RoundData generateRound() {
  // new temporary possibility array
  // we can remove possibilities from it
  // so that the same possibility doesn't come up twice
  List<String> temp = possible.map((item) => item).toList();

  // we'll store the picked options in this array
  List<String> res = new List<String>();

  // add three random shapes from the temp possibles to the options
  for (int i = 0; i < 3; i++) {
    // get a random index from the temporary array
    int randomItemIndex = random.nextInt(temp.length);

    // add the randomth item of the temp array to the results
    res.add(temp[randomItemIndex]);

    // remove the possibility from the temp array so that it doesn't come up twice
    temp.removeAt(randomItemIndex);
  }

  // create a new RoundData instance that we'll be able to return
  RoundData data = RoundData(
    options: res,
    correct: random.nextInt(3),
  );

  return data;
}
Take your time reading the code with the comments and make sure that you understand the hows and whys.
Game screen
Now that we have the underlying game logic in the lib/core/RoundUtilities.dart, let’s navigate back into the lib/ui/screens/Game.dart and import the utilities we just created:
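Given our project structure, that's a single relative import at the top of the file:

// lib/ui/screens/Game.dart
import '../../core/RoundUtilities.dart';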
And since we’d like to update this screen regularly (whenever a new round is generated), we should convert the Game class into a StatefulWidget. We can achieve this with a VS Code shortcut (right-click on class definition > Refactor… > Convert to StatefulWidget):
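After the refactoring, the skeleton looks roughly like this (the previous build contents move into the new State class):

class Game extends StatefulWidget {
  @override
  _GameState createState() => _GameState();
}

class _GameState extends State<Game> {
  @override
  Widget build(BuildContext context) {
    // ...the build contents stay as they were
  }
}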
And now, we’ll build the layout. Let’s take a look at the mock for this screen:
Our screen already contains the shared Logo widget, and we’ll work with drawing shapes a bit later, so we’ll only have to cover
Proper spacing with Spacers
Creating a container for our shape
Drawing the three possible shapes on the bottom of the screen
Hooking them up to a tap handler
If the guess is correct, show a SnackBar and create a new round
If the guess is incorrect, end the session and navigate the user to the lost screen
Initializing data flow
So let’s get started! First, I’ll define the variables inside the state. Since this is a StatefulWidget, we can just define some variables inside the State and expect them to be hooked up to Flutter’s inner state management engine.
I’d also like to give them some initial values, so I’ll create a reset method. It will set the points to zero and create a new round with the generator we created previously. We’ll run this method when the initState method runs so that the screen is initialized with game data:
class _GameState extends State<Game> {
  RoundData data;
  int points = 0;
  int high = 0;

  final GlobalKey scaffoldKey = GlobalKey();

  // the initState method is run by Flutter when the widget is first painted
  // it's like componentDidMount in React
  @override
  void initState() {
    reset();
    super.initState();
  }

  void reset() {
    setState(() {
      points = 0;
      data = generateRound();
    });
  }

  ...
And now, we can move on to defining our layout:
Initializing the UI
Now that we have some data we can play around with, we can create the overall layout of this screen. First, I’ll create a runtime constant (or a final) I’ll call width. It will contain the available screen width:
@override
Widget build(BuildContext context) {
  final width = MediaQuery.of(context).size.width;
I can use this to create a perfect square container for the shape that we’ll render later:
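Something like this - the /1.25 division matches the rect size we'll draw on the canvas later:

Container(
  height: width / 1.25,
  width: width / 1.25,
),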
And we can use the state’s RoundData instance, data, to know which three possible shapes we need to render out. We can just simply map over it and use the spread operator to pass the results into the Row:
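A sketch of what this could look like (the sizing is my assumption, and it assumes flutter_svg is imported in Game.dart too; the guess method is defined in the next step):

Row(
  mainAxisAlignment: MainAxisAlignment.spaceBetween,
  children: <Widget>[
    // spread the mapped possibilities into the Row's children
    ...data.options.map(
      (String name) => GestureDetector(
        onTap: () => guess(context, name),
        child: Container(
          height: 50,
          child: SvgPicture.asset(
            "assets/svg/$name.svg",
            semanticsLabel: '$name icon',
          ),
        ),
      ),
    ),
  ],
),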
This will map over the three possibilities in the state, render their corresponding icons in a sized container, and add a GestureDetector to each so that we know when the user taps on a shape (or, in other words, makes a guess). For the guess method, we’ll pass the current BuildContext and the name of the shape the user has just tapped. We’ll look into why the context is needed in a bit, but first, let’s just define a boilerplate void and print out the name of the shape the user tapped:
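The boilerplate is as simple as it gets:

void guess(BuildContext context, String name) {
  print(name);
}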
And we should also create a correctGuess and a lost handler:
void correctGuess(BuildContext context) {
  // show snackbar
  Scaffold.of(context).showSnackBar(
    SnackBar(
      backgroundColor: Colors.green,
      duration: Duration(seconds: 1),
      content: Column(
        mainAxisAlignment: MainAxisAlignment.center,
        crossAxisAlignment: CrossAxisAlignment.center,
        children: <Widget>[
          Icon(
            Icons.check,
            size: 80,
          ),
          Container(width: 10),
          Text(
            "Correct!",
            style: TextStyle(
              fontSize: 24,
              fontWeight: FontWeight.bold,
            ),
          ),
        ],
      ),
    ),
  );

  // add one point, generate a new round
  setState(() {
    points++;
    data = generateRound();
  });
}

void lost() {
  // navigate the user to the lost screen
  Navigator.pushNamed(
    context,
    "/lost",
    // pass arguments with this constructor:
    arguments: LostScreenArguments(points),
  );

  // reset the game so that when the user comes back from the "lost" screen,
  // a new, fresh round is ready
  reset();
}
There’s something special about the correctGuess block: the Scaffold.of(context) will look up the Scaffold widget in the context. However, the context we are currently passing comes from the build(BuildContext context) line, and that context doesn’t contain a Scaffold yet. We can create a new BuildContext by either extracting the widget into another widget (which we won’t be doing now), or by wrapping the widget in a Builder.
So I’ll wrap the Row with the icons in a Builder and I’ll also throw in an Opacity so that the icons have a nice gray color instead of being plain white:
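In outline, that wrapping looks like this (the opacity value is my guess; the Row is the one from the previous step):

Builder(
  builder: (BuildContext context) => Opacity(
    // opacity value is an assumption - tweak to taste
    opacity: 0.2,
    child: Row(
      // ...the icons from the previous snippet
    ),
  ),
),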
And now, when tapping on the shapes on the bottom, the user will either see a full-screen green snackbar with a check icon and the text “Correct!”, or find themselves on the “Lost” screen. Great! Now, there’s only one thing left before we can call our app a game - drawing the tappable shape on the screen.
Drawing touchable shapes in Flutter
Now that we have the core game logic set up and we have a nice Game screen we can draw on, it’s time to get dirty with drawing on a canvas. Whilst we could use Flutter’s native drawing capabilities, we’d lack a very important feature - interactivity.
Lucky for us, there’s a package that, despite having somewhat limited drawing capabilities, has support for interactivity - and it’s called touchable. Let’s just add it into our dependencies in the pubspec.yaml:
touchable: any
And now, a few words about how we’re going to achieve drawing shapes. I’ll create some custom painters inside lib/core/shapepainters. They will extend Flutter’s CustomPainter class (the interactivity itself will come from the touchable package’s TouchyCanvas). Each of these painters will be responsible for drawing a single shape (e.g. a circle, a line, or a square). I won’t be inserting the code for all of them in the article. Instead, you can check it out inside the repository here.
Then, inside RoundUtilities.dart, we’ll have a method that returns the corresponding painter for a string name - e.g. if we pass “circle”, we’ll get the Circle CustomPainter.
We’ll use this method on the Game screen and pass its result to Flutter’s CustomPaint widget, wrapped in a CanvasTouchDetector from the touchable package. Together, they will paint the shape on a canvas and add the required interactivity.
Creating a CustomPainter
Let’s get started by looking at one of the CustomPainters (the others only differ in the type of shape they draw on the canvas, so we won’t go through them). First, we’ll initialize an empty CustomPainter with the default methods and two properties, context and onTap:
import 'package:flutter/material.dart';
import 'package:touchable/touchable.dart';

class Square extends CustomPainter {
  final BuildContext context;
  final Function onTap;

  Square(this.context, this.onTap);

  @override
  void paint(Canvas canvas, Size size) {}

  @override
  bool shouldRepaint(CustomPainter oldDelegate) {
    return false;
  }
}
We’ll use the context later when creating the canvas, and the onTap will be the tap handler for our shape. Now, inside the paint overridden method, we can create a TouchyCanvas coming from the package:
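A sketch of that paint body - the width / 1.25 sizing follows the container we set up on the Game screen, and the exact Paint settings are assumptions:

```dart
@override
void paint(Canvas canvas, Size size) {
  // TouchyCanvas comes from the touchable package and adds gesture support
  var myCanvas = TouchyCanvas(context, canvas);

  // width / 1.25 matches the size of the container on the Game screen
  final width = MediaQuery.of(context).size.width;

  myCanvas.drawRect(
    Rect.fromLTRB(0, 0, width / 1.25, width / 1.25),
    Paint()..color = Colors.transparent, // transparent, so the shape stays hidden
    onTapDown: (_) => onTap(),
  );
}
```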
This will create a simple rectangle. The arguments of Rect.fromLTRB define the two corner points between which the rect is drawn - 0, 0 and width / 1.25, width / 1.25 for our shape, which fills the container we created on the Game screen.
We also pass a transparent color (so that the shape stays hidden) and an onTapDown handler, which simply runs the onTap callback we passed in. Noice!
This is it for drawing our square shape. I created the other CustomPainter classes that we’ll need for drawing a circle, cross, donut, line, oval, and square shapes. You could either try to implement them yourself, or just copy and paste them from the repository here.
Drawing the painter on the screen
Now that our painters are ready, we can move on to the second step: the getPainterForName method. First, I’ll import all the painters into the RoundUtilities.dart:
And then just write a very simple switch statement that will return the corresponding painter for the input string:
dynamic getPainterForName(BuildContext context, Function onTap, String name) {
switch (name) {
case "circle":
return Circle(context, onTap);
case "cross":
return Cross(context, onTap);
case "donut":
return Donut(context, onTap);
case "line":
return Line(context, onTap);
case "oval":
return Oval(context, onTap);
case "square":
return Square(context, onTap);
}
}
And that’s it for the utilities! Now, we can move back into the Game screen and use this getPainterForName utility and the canvas to draw the shapes on the screen:
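For reference, a sketch of how this could look in the Game screen’s build method - current.shape stands in for however your code stores the current round’s shape name (hypothetical), and onShapeTap is the handler we define below:

```dart
// inside the Game screen's build method; `width` is the screen width
Container(
  height: width / 1.25,
  width: width / 1.25,
  child: CanvasTouchDetector(
    builder: (context) {
      return CustomPaint(
        painter: getPainterForName(
          context,
          onShapeTap,
          current.shape, // hypothetical: however you store the round's shape name
        ),
      );
    },
  ),
)
```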
And that’s it! We only need to create an onShapeTap handler to get all of this working - for now, it’s okay to just throw in a print statement; we’ll add the haptic feedback and vibration later on:
void onShapeTap() {
  print(
    "the user has tapped inside the shape. we should make a gentle haptic feedback!",
  );
}
And now, when you tap on the shape inside the blank space, the Flutter CLI will print this message to the console. Awesome! We only need to add the haptic feedback, store the high scores, and wrap things up from here.
Adding haptic feedback and vibration in Flutter
When making mobile applications, you should always aim to design native experiences on both platforms. That means using different designs for Android and iOS, and using the platform’s native capabilities like Google Pay / Apple Pay or 3D Touch. To get a sense of which designs and experiences feel native on each platform, you should use both platforms while developing, or at least be able to try them out from time to time.
One of the places where Android and iOS devices differ is how they handle vibration. While Android has a basic vibration capability, iOS comes with a very extensive haptic feedback engine that enables gentle, hit-like feedback with custom intensities and curves, mimicking the 3D Touch effect, tapbacks, and more. It helps users feel their actions, taps, and gestures, and adding some gentle haptic feedback is a very nice finishing touch for your app. It makes your app feel native and improves the overall experience.
Some places where you can try out this advanced haptic engine on an iPhone (6s or later) are the home screen when 3D Touching an app, the Camera app when taking a photo, the Clock app when picking an alarm time (or any other carousel picker), some iMessage effects, or, on notched iPhones, opening the app switcher from the bottom bar. Other third-party apps also feature gentle physical feedback: for example, Telegram plays a nice, gentle haptic when you slide for a reply.
Before moving on with this tutorial, you may want to try out this effect to get a feel for what we’re trying to achieve on iOS - and make sure you’re holding the device in your whole palm so that you can feel the gentle tapbacks.
In our app, we’d like to add these gentle haptic feedbacks in a lot of places: when navigating, making a guess, or, obviously, when tapping inside the shape. On Android, we’ll only leverage the vibration engine when the user taps inside a shape or loses.
And since we’d like to execute different code depending on which platform the app is running on, we need a way to check the current platform at runtime. Lucky for us, the dart:io library provides a Platform API that we can ask whether the current platform is iOS or Android, and we can use the HapticFeedback API from flutter/services.dart to call the native haptic feedback and vibration APIs:
// lib/core/HapticUtilities.dart
import 'dart:io' show Platform;
import 'package:flutter/services.dart';

void lightHaptic() {
  if (Platform.isIOS) {
    HapticFeedback.lightImpact();
  }
}

void vibrateHaptic() {
  if (Platform.isIOS) {
    HapticFeedback.heavyImpact();
  } else {
    // this will work on most Android devices
    HapticFeedback.vibrate();
  }
}
And we can now import this file on other screens and use the lightHaptic and vibrateHaptic methods to make haptic feedback for the user that works on both platforms that we’re targeting:
// lib/ui/screens/Game.dart
import '../../core/HapticUtilities.dart'; // ADD THIS LINE
...
void guess(BuildContext context, String name) {
lightHaptic(); // ADD THIS LINE
...
void lost() {
vibrateHaptic(); // ADD THIS LINE
...
Container(
  height: width / 1.25,
  width: width / 1.25,
  child: CanvasTouchDetector(
    builder: (context) {
      return CustomPaint(
        painter: getPainterForName(
          context,
          vibrateHaptic, // CHANGE THIS LINE
And on the Home and Lost screens:
// Home.dart
return GestureDetector(
  // tapping on empty spaces would not trigger the onTap without this
  behavior: HitTestBehavior.opaque,
  onTap: () {
    // navigate to the game screen
    lightHaptic(); // ADD THIS LINE
    Navigator.pushNamed(context, "/game");
  },
...
// Lost.dart
return GestureDetector(
  behavior: HitTestBehavior.opaque,
  onTap: () {
    // navigate back from the lost screen
    lightHaptic(); // ADD THIS LINE
    Navigator.pop(context);
  },
...aaaaand you’re done for iOS! On Android, there’s still a small thing required - you need permission for using the vibration engine, and you can ask for permission from the system in the shapeblinder/android/app/src/main/AndroidManifest.xml:
Now when running the app on a physical device, you’ll feel either the haptic feedback or the vibration, depending on what kind of device you’re using. Isn’t it amazing? You can literally feel your code!
Storing high scores - data persistency in Flutter
There’s just one feature left before we finish the MVP of this awesome game. The users are now happy - they feel a sense of accomplishment when they guess right, and they get points, but they can’t really flex their best score to their friends, as we don’t store it anywhere. Let’s fix that by storing persistent data in Flutter! 💪
To achieve this, we’ll use the shared_preferences package. It can store simple key/value pairs on the device. You already know what to do with this dependency: go into pubspec.yaml, add it to the deps, wait until VS Code runs the flutter pub get command automatically (or run it yourself), and then restart the current Flutter session by hitting Ctrl + C and running flutter run again.
Now that the shared_preferences package is injected, we can start using it. The package has two methods we’ll make use of now: .getInt() and .setInt(). This is how we’ll implement them:
We’ll store the high score when the user loses the game
We’ll retrieve it in the Tap widget, and on the Game screen
Let’s get started by storing the high score! Inside the lib/ui/screens/Game.dart, we’ll create two methods: loadHigh and setHigh:
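A sketch of these two methods - they assume a high state variable on the Game screen and the same 'high' key that the Tap widget reads later:

```dart
import 'package:shared_preferences/shared_preferences.dart';

int high = 0; // state variable on the Game screen

void loadHigh() async {
  SharedPreferences prefs = await SharedPreferences.getInstance();
  setState(() {
    high = prefs.getInt('high') ?? 0;
  });
}

void setHigh(int newHigh) async {
  SharedPreferences prefs = await SharedPreferences.getInstance();
  await prefs.setInt('high', newHigh);
  setState(() {
    high = newHigh;
  });
}
```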
And because we’re displaying the high score in the Logo widget, we’ll want to call setState when the score is updated, so that the widget gets re-rendered with our new data. We’ll also want to call loadHigh when the screen is rendered for the first time, so that we display the actual stored high score for the user:
// the initState method is run by Flutter when the element is painted for the first time
// it's like componentDidMount in React
@override
void initState() {
  reset();
  loadHigh(); // ADD THIS
  super.initState();
}
And when the user loses, we’ll store the high score:
void lost() {
  vibrateHaptic();

  // if the score is higher than the current high score,
  // update the high score
  if (points > high) {
    setHigh(points);
  }
  ...
And that’s it for the game screen! We’ll also want to load the high score on the Tap widget, which - currently - is a StatelessWidget. First, let’s refactor the Tap widget into a StatefulWidget by right-clicking on the name of the class, hitting “Refactor…”, and then “Convert to StatefulWidget”.
Then, define the state variables and use the very same methodology we already looked at to load the high score and update the state:
class _TapState extends State<Tap> {
  int high = 0;

  void loadHigh() async {
    SharedPreferences prefs = await SharedPreferences.getInstance();
    setState(() {
      high = prefs.getInt('high') ?? 0;
    });
  }
Then, call this loadHigh method inside the build method so that the widget always shows the latest high score:
Oh, and we should also replace the hard-coded “high score: 0”s with the actual variable that represents the high score:
Text(
"best score: $high",
Make sure that you update your code in both the Game and Tap widgets. We’re all set with storing and displaying the high score, so there’s only one thing left:
Summing our Dart and Flutter series up
Congratulations! 🎉 I can’t really put into words how far we’ve come in the Dart and Flutter ecosystem over these three episodes together:
First, we looked at Dart and OOP: We looked at variables, constants, functions, arrays, objects, object-oriented programming, and asynchrony, and compared these concepts to what we’ve seen in JavaScript.
Then, we started with some Flutter theory: We took a peek at the Flutter CLI, project structuring, state management, props, widgets, layouts, rendering lists, theming, and proper networking.
Then we created a pretty amazing game together: We built a cross-platform game from scratch. We mastered the Hero animation, basic concepts about state management, importing third-party dependencies, building multiple screens, navigating, storing persistent data, adding vibration, and more…
I really hope you enjoyed this course! If you have any questions, feel free to reach out in the comments section. It was a lot to take in, but there’s still even more to learn! If you want to stay tuned, subscribe to our newsletter - and make sure that you check out these awesome official Dart and Flutter related resources later on your development journey:
Have you heard of functional programming, but are vague on the details? Are you ready to expand beyond the object-oriented mindset? Tomorrow, Nick Hodges, author of Coding in Delphi, will teach us how to leverage functional programming techniques to create beautiful programs in Delphi. Just one day away, Functional Programming with Delphi is a knowledge-broadening talk you won’t want to miss!
DelphiCon 2020 offers ten talks and four expert panels from Embarcadero technology partners and Most Valuable Professionals, spanning the software spectrum from education to industrial database access. Come for the functional programming and leave with a greater understanding of how to maximize performance with Delphi. The conference is free and open to the public. Register now by clicking the “Save My Seat” button at delphicon.embarcadero.com!
Customer Relationship Management (CRM) is a system to automate and manage the relationship between customers and a company. The system covers customer relationships with the sales and marketing departments, and also improves performance and increases productivity.
CRM systems are always in demand, as they’re useful and widely used to track customers and sales records. So if you’re a developer thinking about building a CRM system, you’re in the right place. In this tutorial you will learn how to develop a CRM system with PHP and MySQL.
In this tutorial, we will implement a CRM system for sales people to track customers. We will develop a Sales Manager section and a Sales People section to build the system. We will cover the following in this tutorial.
The Sales Manager will be able to do the following:
Manage customers
Manage sales team
View sales activities
The Sales People will be able to do the following:
Access tasks
View leads
Create new tasks for each lead
Create new opportunity
Close a sale
So let’s start developing the CRM system with PHP and MySQL. The major files are:
index.php
sales_people.php
tasks.php
contact.php
leads.php
opportunity.php
User.php: A class containing user methods.
Leads.php: A class containing leads methods.
Tasks.php: A class containing tasks methods.
Opportunity.php: A class containing sales opportunity methods.
Customer.php: A class containing customer methods.
Step 1: Create MySQL Database Tables
First we will create the MySQL database tables for our system. The major tables are the following.
We will create the crm_users table to store user details.
CREATE TABLE `crm_users` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `name` varchar(255) NOT NULL,
  `email` varchar(50) NOT NULL,
  `password` varchar(50) NOT NULL,
  `roles` enum('manager','sales') NOT NULL,
  `status` int(11) NOT NULL DEFAULT 0,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
We will create the crm_contact table to store contact details.
CREATE TABLE `crm_contact` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `contact_title` varchar(255) NOT NULL,
  `contact_first` varchar(255) NOT NULL,
  `contact_middle` varchar(255) NOT NULL,
  `contact_last` varchar(255) NOT NULL,
  `initial_contact_date` datetime NOT NULL DEFAULT current_timestamp(),
  `title` varchar(255) NOT NULL,
  `company` varchar(255) NOT NULL,
  `industry` varchar(255) NOT NULL,
  `address` text NOT NULL,
  `city` varchar(255) NOT NULL,
  `state` varchar(255) NOT NULL,
  `country` varchar(255) NOT NULL,
  `zip` int(11) NOT NULL,
  `phone` int(11) NOT NULL,
  `email` varchar(50) NOT NULL,
  `status` enum('Lead','Proposal','Customer / won','Archive') NOT NULL,
  `website` varchar(255) NOT NULL,
  `sales_rep` int(11) NOT NULL,
  `project_type` varchar(255) NOT NULL,
  `project_description` text NOT NULL,
  `proposal_due_date` varchar(255) NOT NULL,
  `budget` int(11) NOT NULL,
  `deliverables` varchar(255) NOT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
And we will create the crm_tasks table to store task details.
CREATE TABLE `crm_tasks` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `created` datetime NOT NULL DEFAULT current_timestamp(),
  `task_type` varchar(255) NOT NULL,
  `task_description` text NOT NULL,
  `task_due_date` varchar(255) NOT NULL,
  `task_status` enum('Pending','Completed') NOT NULL,
  `task_update` varchar(255) NOT NULL,
  `contact` int(11) NOT NULL,
  `sales_rep` int(11) NOT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
Step 2: Implement User Login and Access
We will implement user login for the sales manager and sales people, so that each can access their related section and manage the system. We will create the login form in the index.php file.
Then we will implement the login functionality by calling the login() method from the User.php class.
public function login(){
    if($this->email && $this->password) {
        $sqlQuery = "
            SELECT * FROM ".$this->userTable."
            WHERE status = 1
            AND roles = ? AND email = ? AND password = ?";
        $stmt = $this->conn->prepare($sqlQuery);
        $password = md5($this->password);
        $stmt->bind_param("sss", $this->loginType, $this->email, $password);
        $stmt->execute();
        $result = $stmt->get_result();
        if($result->num_rows > 0){
            $user = $result->fetch_assoc();
            $_SESSION["userid"] = $user['id'];
            $_SESSION["role"] = $this->loginType;
            $_SESSION["name"] = $user['name'];
            return 1;
        } else {
            return 0;
        }
    } else {
        return 0;
    }
}
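One caveat worth flagging: md5() is not a safe password hash nowadays. This tutorial keeps it for simplicity, but in a real deployment you’d want PHP’s built-in password_hash() / password_verify() instead. A sketch of the idea (not part of the tutorial’s code as written; you’d also widen the password column to varchar(255)):

```php
<?php
// Hypothetical hardening sketch using PHP's built-in password API.

// when registering a user:
$hash = password_hash($plainPassword, PASSWORD_DEFAULT);
// ... store $hash in crm_users.password ...

// when logging in: fetch the row by role + email only, then verify:
if ($user && password_verify($plainPassword, $user['password'])) {
    // credentials are valid - set up the session as in login()
}
```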
Step 3: Manage Contacts
A contact holds information about the people we know and work with. Usually one sales representative has many contacts, so here we will manage them. In the contact.php file, we will display the list of contacts.
We have also implemented contact add, edit, and delete functionality.
Step 4: Manage Tasks
Now we will manage activities like meetings, phone calls, emails, and any other activities that let us interact with customers. We will implement a listing of the task details.
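As an illustration, a minimal query for that listing might look like this - it follows the crm_tasks schema above and the mysqli prepared-statement style used in login(), but the exact class wiring is an assumption:

```php
<?php
// sketch: list the logged-in sales rep's tasks with each contact's name
$sqlQuery = "
    SELECT t.*, c.contact_first, c.contact_last
    FROM crm_tasks t
    INNER JOIN crm_contact c ON c.id = t.contact
    WHERE t.sales_rep = ?
    ORDER BY t.created DESC";
$stmt = $this->conn->prepare($sqlQuery);
$stmt->bind_param("i", $_SESSION["userid"]);
$stmt->execute();
$result = $stmt->get_result();
while ($task = $result->fetch_assoc()) {
    // render one table row per $task here
}
```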
In recent years we have seen a growing interest in mobile computing that supports the growing number of remote workers, and 2020 drew a lot of attention to these business applications. Employees essentially work from an “office” that consists of nothing more than a laptop or smartphone, accessing the internet from home, a customer’s office, or other remote locations. In 2020 alone, the technological capabilities of devices and of wireless and satellite communications have shown us how to make data accessible from anywhere in the world, at any time, from any kind of device. Any undertaking that moves from a traditional on-site environment to one that uses mobile devices to deliver results runs into problems that have to be solved. Security, TCO, and business policies and practices are still areas where we need to understand what is and is not achievable for our users in order to succeed: from online connectivity for as long as employees need to work, through to the security of data at rest and in motion. Mobile databases are one of the options available to mitigate some of these problems.
With data breach risks a growing threat, when you look for a mobile database solution that offers the data protection and compliance requirements you need across all your platforms, along with protected backups and secure offline capabilities, there tend to be fewer options available. One of the remaining options is InterBase ToGo.
InterBase ToGo is a portable database that can be embedded into your applications on Android and iOS, as well as on macOS, Windows, and Linux. With built-in security features, a very small footprint, and minimal administration overhead, you can embed and distribute the ToGo edition with no further maintenance effort, knowing that your user and corporate data is safe and secure.
InterBase ToGo can easily be embedded into your RAD applications. This makes it one of the best options available for ISVs/VARs that need a database that reduces the security risks associated with deploying business applications. As a VAR, using an encrypted database like IB ToGo lets your company focus on your business and user needs by shortening deployment time and keeping up with security compliance standards.
Setting up encryption
Enabling encryption with InterBase is easy to do, but hard to crack.
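As far as I recall InterBase’s encryption model (please verify the exact statements against the InterBase documentation for your version), encryption is managed by a dedicated SYSDSO (Data Security Owner) user, who creates an encryption key and then applies it to the database, roughly like this:

```sql
-- sketch from memory of InterBase's encryption feature - verify against
-- the InterBase documentation before relying on it.
-- connect to the database as the SYSDSO (Data Security Owner) user, then:
CREATE ENCRYPTION aes_key FOR AES;      -- create a system encryption key
ALTER DATABASE ENCRYPT WITH aes_key;    -- encrypt the database with that key
```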
Finding your InterBase ToGo license
An IBToGo license is included with your RAD Studio Enterprise and Architect license. You can find your license in the following location:
As mentioned above, you can deploy InterBase embedded in your applications on multiple devices. Read up on how to deploy IB ToGo on an Android smartphone.
В предыдущие годы мы наблюдали растущий интерес к мобильным вычислениям, которые обеспечивают поддержку растущего числа удаленных сотрудников, и 2020 год привлек большое внимание к этим бизнес-приложениям. Удаленные сотрудники должны работать из «офиса», который на самом деле представляет собой просто ноутбук или смартфон с доступом в Интернет из дома, офиса клиента или других удаленных мест. Только в 2020 году технологические возможности устройств беспроводной и спутниковой связи показали нам, как сделать данные доступными из любой точки мира в любой момент времени с любого типа устройства. При любом переходе с традиционной локальной среды на среду, позволяющую использовать вычислительные мощности телефонов для получения результатов, возникают проблемы, которые необходимо решить для достижения успеха. Безопасность, совокупная стоимость владения, бизнес-рекомендации и практики — все это области, в которых нам нужно понимать, что можно сделать, а что нет для достижения успеха нашими пользователями; от сетевых подключений, пока работникам необходимо работать, до данных в состоянии покоя и безопасности в движении. Мобильные базы данных — это один из способов решения некоторых из этих проблем.
Поскольку риски утечки данных становятся растущей угрозой, при поиске решения для мобильной базы данных, которое предлагает необходимые требования к защите данных и соответствию на всех ваших платформах, может обеспечить защищенное резервное копирование и безопасные автономные возможности, вариантов, как правило, становится меньше. Один из тех вариантов, который все еще остается, — это InterBase ToGo.
InterBase ToGo — это переносимая база данных, предназначенная для встраивания в ваши приложения на Android и iOS, а также в Mac OS, Windows и Linux. Благодаря встроенной системе безопасности, небольшому размеру и минимальному количеству администратора, ToGo позволяет вам встраивать, развертывать и расслабляться, поскольку вы знаете, что ваши пользовательские и корпоративные данные в безопасности.
InterBase ToGo можно легко встроить в ваши приложения RAD, что делает его одним из лучших вариантов для независимых поставщиков программного обеспечения / VAR, которым требуется база данных, которая снижает риски безопасности, связанные с развертыванием бизнес-приложений. В качестве VAR использование зашифрованной базы данных, такой как IB ToGo, позволяет вашему бизнесу сосредоточиться на своем бизнесе и потребностях пользователей за счет сокращения времени развертывания и соблюдения стандартов безопасности.
Настройка шифрования
Включить шифрование с помощью InterBase легко, но сложно взломать.
Как найти вашу лицензию InterBase ToGo
Лицензия IBToGo включена в вашу лицензию на RAD Studio Enterprise и Architect. Вы можете найти свою лицензию по следующему адресу:
Как упоминалось выше, вы можете развернуть InterBase, встроенный в ваши приложения, на нескольких устройствах, узнайте, как развернуть IB ToGo на смартфоне Android.
Nos anos anteriores, vimos um interesse crescente na computação móvel que fornece suporte para o número crescente de trabalhadores remotos, e 2020 atraiu muita atenção para esses aplicativos de negócios. Os trabalhadores remotos têm os requisitos de trabalhar em um “escritório” que é na verdade apenas um laptop ou smartphone acessando a internet de suas casas, do escritório de um cliente ou de outros locais remotos. Somente em 2020, os recursos de tecnologia dos dispositivos, comunicações sem fio e por satélite nos mostraram como tornar os dados acessíveis de qualquer lugar do mundo a qualquer momento a partir de qualquer tipo de dispositivo. Com qualquer empreendimento que muda de um ambiente local tradicional para um que permite que o poder de computação dos telefones forneça resultados, surgem problemas que precisam ser resolvidos para o sucesso. Segurança, TCO, diretrizes e práticas de negócios ainda são áreas em que precisamos entender o que é e o que não pode ser alcançado para que nossos usuários tenham sucesso; desde conexões online pelo tempo que os funcionários precisarem trabalhar até dados em repouso e segurança em movimento. Os bancos de dados móveis são uma das opções disponíveis para mitigar alguns desses problemas.
Com os riscos de violação de dados uma ameaça crescente, encontrar uma solução de banco de dados móvel que ofereça a proteção de dados e os requisitos de conformidade de que você precisa em todas as suas plataformas, pode fornecer backups protegidos e recursos off-line seguros, tende a haver menos opções disponíveis. Uma das opções que ainda resta é o InterBase ToGo.
O InterBase ToGo é um banco de dados transportável projetado para ser embutido em seus aplicativos Android e iOS, bem como Mac OS, Windows e Linux. Com segurança integrada, uma pequena pegada e um administrador mínimo necessário, o ToGo permite que você incorpore, implante e relaxe, sabendo que seus dados corporativos e de usuário estão protegidos e protegidos.
O InterBase ToGo pode ser facilmente embutido em seus aplicativos RAD, tornando-o uma das melhores opções disponíveis para ISVs / VARs que precisam de um banco de dados que reduza os riscos de segurança associados à implantação de aplicativos de negócios. Como um VAR, o uso de um banco de dados criptografado como o IB ToGo permite que sua empresa se concentre nas necessidades do seu negócio e do usuário, reduzindo o tempo de implantação e mantendo os padrões de conformidade de segurança.
Configurando criptografia
Habilitar a criptografia com o InterBase é fácil de fazer, mas difícil de violar.
Encontrando sua licença InterBase ToGo
Incluída com sua licença RAD Studio Enterprise and Architect está uma licença IBToGo. Você pode encontrar sua licença no seguinte local:
Como mencionado acima, você pode implantar o InterBase embutido em seus aplicativos em vários dispositivos, verifique como implantar o IB ToGo em um smartphone Android.
En años anteriores, hemos visto un interés creciente en la informática móvil que brinda soporte para el número creciente de trabajadores remotos, y 2020 ha atraído mucha atención a estas aplicaciones comerciales. Los trabajadores remotos tienen los requisitos de trabajar desde una “oficina” que en realidad es solo una computadora portátil o un teléfono inteligente que accede a Internet desde sus hogares, la oficina de un cliente u otras ubicaciones remotas. Solo en 2020, las capacidades tecnológicas de los dispositivos, las comunicaciones inalámbricas y por satélite nos han mostrado cómo hacer que los datos sean accesibles desde cualquier parte del mundo en cualquier momento y desde cualquier tipo de dispositivo. Con cualquier esfuerzo que cambie de un entorno local tradicional a uno que permita que la potencia informática de los teléfonos brinde resultados, surgen problemas que deben resolverse para tener éxito. La seguridad, el TCO, las pautas comerciales y las prácticas siguen siendo áreas en las que debemos comprender qué es y qué no se puede lograr para que nuestros usuarios tengan éxito; desde conexiones en línea durante el tiempo que los trabajadores necesiten trabajar hasta datos en reposo y seguridad en movimiento. Las bases de datos móviles son una de las opciones disponibles para mitigar algunos de estos problemas.
Con el riesgo de violación de datos como una amenaza creciente, encontrar una solución de base de datos móvil que ofrezca la protección de datos y los requisitos de cumplimiento que necesita en todas sus plataformas, puede proporcionar copias de seguridad seguras y capacidades fuera de línea seguras, tiende a haber menos opciones disponibles. Una de esas opciones que aún queda es InterBase ToGo.
InterBase ToGo es una base de datos transportable que está diseñada para integrarse en sus aplicaciones en Android e iOS, así como en Mac OS, Windows y Linux. Con seguridad incorporada, una huella pequeña y un administrador mínimo necesario, ToGo le permite integrar, implementar y relajarse, ya que sabe que sus datos corporativos y de usuario están seguros y protegidos.
InterBase ToGo puede integrarse fácilmente en sus aplicaciones RAD, lo que la convierte en una de las mejores opciones disponibles para los ISV / VAR que necesitan una base de datos que reduzca los riesgos de seguridad asociados con la implementación de aplicaciones comerciales. Como VAR, el uso de una base de datos encriptada como IB ToGo permite que su empresa se concentre en sus necesidades comerciales y de los usuarios al reducir el tiempo de implementación y mantenerse al día con los estándares de cumplimiento de seguridad.
Configurar cifrado
Habilitar el cifrado con InterBase es fácil, pero difícil de acceder.
Finding Your InterBase ToGo License
An IBToGo license is included with your RAD Studio Enterprise and Architect licenses. You can find your license in the following location:
As mentioned earlier, you can deploy embedded InterBase in your applications across a range of devices; see how to deploy IB ToGo on an Android smartphone.
Sometimes you need to install components manually. Perhaps the installer was not updated for your version of Delphi, or it is an open-source library without an installer. Whatever the reason, here is a short guide in addition to what the DocWiki has on the subject.
I am going to write this guide around installing the Radiant Shapes Pack available via GetIt. I suspect it has not yet been updated to install in 10.4, and while R&D works on that, this is a great opportunity to learn how to install it manually.
After installing from GetIt you will not find it in the IDE, and it is missing from the package list, which you access via Component 🡆 Install Packages while no project is open.
This is where all the BPL packages are listed. Click the Add button and browse to find the BPL:
C:\Program Files (x86)\Raize\RadiantShapes\1.4\Bin\RadiantShapesFmx_Design270.bpl (If you do not have that BPL or path for Radiant Shapes, make sure you installed it from GetIt; you can also run the installer manually: C:\Users\Public\Documents\Embarcadero\Studio\21.0\CatalogRepository\RadiantShapes-270-1.2\Installer\RadiantShapes.exe)
or whichever design-time package you need. This installs the components into the IDE.
Many projects have both design-time and runtime packages. A design-time package contains the information needed to install into the IDE, plus any special designers, while runtime packages contain only the code needed at runtime. You can optionally even ship these packages with your binary and link against them at runtime.
Next you need to tell the IDE where to find the DCUs and, optionally, the source files. What if you only have source files? No problem: open and build all the packages, at least in Release mode, on every platform the library supports. Then head to Tools 🡆 Options, then Language 🡆 Delphi 🡆 Library.
Then fill in the details for each platform you built and want to support:
Selected Platform – specifies which platform you are providing details for below:
Linux 64-bit, iOS 64-bit, Win 32-bit, Win 64-bit, macOS 64-bit, Android 32-bit, Android 64-bit, and/or iOS Simulator.
Library path – this is the path to the Release DCUs. Some people point to their PAS files here, which works, but then you end up recompiling the library more often than necessary.
Radiant Shapes includes all the DCUs in subfolders off the path C:\Program Files (x86)\Raize\RadiantShapes\1.4\Lib
Tip: paste the new path into the edit box before clicking the browse button if you need to navigate to a subfolder. Then be sure to click [Add] when you are done.
Library Paths dialog: location of the platform-specific DCU folders for Radiant Shapes, C:\Program Files (x86)\Raize\RadiantShapes\1.4\Lib
Browsing path is where you can optionally add a path to the PAS source files. This lets you navigate to those source files from the IDE with the Find Declaration context-menu item.
For Radiant Shapes, the source is found in C:\Program Files (x86)\Raize\RadiantShapes\1.4\Source
Debug DCU path lets you optionally point to the debug version of the DCUs. This is useful if the debug build has additional information or different behaviors.
Radiant Shapes has no special debug DCUs, so we do not need to add anything here.
Once you have completed these settings for each platform, you are good to go! Happy installing!
Are you a teacher, educator, or parent of a child interested in programming? Are you looking for a simple yet powerful language and a student-friendly integrated development environment? With its combination of full-language features, simple syntax, and visual drag-and-drop development, Delphi rocks its competition. Still unsure? Let Victory Fernandes, a former professor of mathematics and computer science at Brazil's Unifacs University, walk you through the benefits of Delphi in education in his talk Delphi at the University – Insights for Students and Teachers. This presentation and Q&A session is free and bundled with nine other talks and four panels by industry professionals. Sign up now by clicking the "Save My Seat" button at delphicon.embarcadero.com.
Have you ever accessed a website on your mobile device and found it formatted for the desktop, nearly unreadable on a 5-inch screen? Users with high-resolution displays run into similar problems. As 4K screens proliferate and consumer pressure for 8K grows, it is important to adapt user interfaces to keep forms and controls from becoming unreadably small on high-resolution monitors. RAD Studio 10.3 Rio and Rio Update 2 introduced enhanced controls for high-resolution applications to address this problem, and Ray Konopka of Raize Software, Inc. is here to teach us how to get the most out of them. Only seven days away, Leveraging High DPI in VCL Applications is a must for any developer, hobbyist, or RAD Studio enthusiast looking for new techniques to stay relevant in our changing software landscape.
DelphiCon 2020 offers ten talks and four expert panels by Embarcadero technology partners and Most Valuable Professionals, spanning the range of software from education to industrial database access. Come for the High-DPI knowledge and leave with a better understanding of Delphi web applications. The conference is free and open to the public. Register now by clicking the "Save My Seat" button at delphicon.embarcadero.com!
User management is an important part of any web application in which users can create and manage their accounts. Users are allowed to register an account and log in to access it. Users are also managed by an administrator, who can assign roles or update user details.
So if you are looking for a solution for building a secure user management system, you are in the right place. In this tutorial, you will learn how to create a secure user management system with PHP and MySQL. You may also want to check out Login and Registration System with PHP & MySQL to implement user login and registration.
We will implement functionality to handle user operations such as user registration, email verification, login, password reset, and profile editing. We will also create an admin panel to manage users at the admin end: create new users, edit existing users' details, and delete users.
We will cover this tutorial in easy steps with a live example to manage users from both the front end and the administrator end.
So let's start implementing the user management system with PHP and MySQL.
User Login and Registration features:
User registration with email verification.
User login with a remember-password option.
Forgotten password & password reset.
User profile.
User profile edit & save.
Admin Panel features:
Admin login.
Admin password reset.
Admin profile.
Dashboard with user stats.
Users list.
Add new user with role.
Edit & save user.
Delete user.
Before we begin, take a look at the file structure for this example.
index.php: User dashboard.
register.php: Handle user registration.
verify.php: Complete user registration after email verification.
login.php: Handle user login.
forget_password.php: Handle forgotten-password requests.
reset_password.php: Reset to a new password.
account.php: User profile.
edit_account.php: Edit user profile.
User.php: Class that holds the user methods.
There will be the following files in the Admin section to manage users.
index.php: Handle admin login.
dashboard.php: Display user stats.
change_password.php: Change the admin password.
profile.php: Display the admin profile.
user_list.php: Display the user list; add new users, edit, and delete users.
Step 1: Create the MySQL Database Table
First, we will create a MySQL database table named user to store the user details.
CREATE TABLE `user` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`first_name` varchar(50) NOT NULL,
`last_name` varchar(50) NOT NULL,
`email` varchar(50) NOT NULL,
`password` varchar(50) NOT NULL,
`gender` enum('male','female') CHARACTER SET utf8 NOT NULL,
`mobile` varchar(50) NOT NULL,
`designation` varchar(50) NOT NULL,
`image` varchar(250) NOT NULL,
`type` varchar(250) NOT NULL DEFAULT 'general',
`status` enum('active','pending','deleted','') NOT NULL DEFAULT 'pending',
`authtoken` varchar(250) NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
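The User.php class used in the following steps relies on a database connection handle ($this->dbConnect) and a table name ($this->userTable) that the tutorial does not show being set up. A minimal constructor sketch is below; the host, credentials, and database name are placeholders you must replace, not values from the tutorial.

```php
<?php
// Minimal sketch of the connection setup assumed by the User class methods.
class User {
    protected $dbConnect;
    protected $userTable = 'user';

    public function __construct() {
        // Placeholder credentials; replace with your own server details.
        $this->dbConnect = mysqli_connect('localhost', 'db_user', 'db_password', 'user_management');
        if (!$this->dbConnect) {
            die('Database connection failed: ' . mysqli_connect_error());
        }
    }
}
```

The methods in the next steps (register(), verifyRegister(), login()) then run their queries through this shared $this->dbConnect handle.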
Step 2: Implement User Registration
We will design the user registration form in the register.php file and handle user registration on form submit.
In the class User.php, we will create a method register() to implement user registration. We will send a verification email to the user's email address with a link to verify and complete the registration.
public function register(){
    $message = '';
    if(!empty($_POST["register"]) && $_POST["email"] != '') {
        // Use a prepared statement so the submitted email cannot inject SQL.
        $checkQuery = "SELECT id FROM ".$this->userTable." WHERE email = ?";
        $stmt = mysqli_prepare($this->dbConnect, $checkQuery);
        mysqli_stmt_bind_param($stmt, "s", $_POST["email"]);
        mysqli_stmt_execute($stmt);
        mysqli_stmt_store_result($stmt);
        $isUserExist = mysqli_stmt_num_rows($stmt);
        mysqli_stmt_close($stmt);
        if($isUserExist) {
            $message = "A user already exists with this email address.";
        } else {
            $authtoken = $this->getAuthtoken($_POST["email"]);
            $passwordHash = md5($_POST["passwd"]);
            $insertQuery = "INSERT INTO ".$this->userTable." (first_name, last_name, email, password, authtoken) VALUES (?, ?, ?, ?, ?)";
            $stmt = mysqli_prepare($this->dbConnect, $insertQuery);
            mysqli_stmt_bind_param($stmt, "sssss", $_POST["firstname"], $_POST["lastname"], $_POST["email"], $passwordHash, $authtoken);
            $userSaved = mysqli_stmt_execute($stmt);
            mysqli_stmt_close($stmt);
            if($userSaved) {
                $link = "<a href='http://example.com/user-management-system/verify.php?authtoken=".$authtoken."'>Verify Email</a>";
                $toEmail = $_POST["email"];
                $subject = "Verify email to complete registration";
                $msg = "Hi there, click on this ".$link." to verify your email and complete registration.";
                $msg = wordwrap($msg, 70);
                $headers = "From: info@webdamn.com";
                if(mail($toEmail, $subject, $msg, $headers)) {
                    $message = "A verification email has been sent to your email address. Please check your email and verify to complete registration.";
                }
            } else {
                $message = "User registration request failed.";
            }
        }
    }
    return $message;
}
We will create a method verifyRegister() in the class User.php to verify the user's email and complete registration.
public function verifyRegister(){
    $verifyStatus = 0;
    if(!empty($_GET["authtoken"])) {
        // Look up the token with a prepared statement to avoid SQL injection.
        $sqlQuery = "SELECT id, email FROM ".$this->userTable." WHERE authtoken = ?";
        $stmt = mysqli_prepare($this->dbConnect, $sqlQuery);
        mysqli_stmt_bind_param($stmt, "s", $_GET["authtoken"]);
        mysqli_stmt_execute($stmt);
        $resultSet = mysqli_stmt_get_result($stmt);
        $userDetails = mysqli_fetch_assoc($resultSet);
        mysqli_stmt_close($stmt);
        if($userDetails) {
            // Re-derive the token from the stored email and compare.
            $authtoken = $this->getAuthtoken($userDetails['email']);
            if($authtoken == $_GET["authtoken"]) {
                $updateQuery = "UPDATE ".$this->userTable." SET status = 'active' WHERE id = ?";
                $stmt = mysqli_prepare($this->dbConnect, $updateQuery);
                mysqli_stmt_bind_param($stmt, "i", $userDetails['id']);
                $isUpdated = mysqli_stmt_execute($stmt);
                mysqli_stmt_close($stmt);
                if($isUpdated) {
                    $verifyStatus = 1;
                }
            }
        }
    }
    return $verifyStatus;
}
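Both register() and verifyRegister() call $this->getAuthtoken($email), a method the tutorial never lists. Because verifyRegister() re-derives the token from the stored email address and compares it with the one in the URL, the method must be deterministic for a given email. A hypothetical stand-in consistent with that behavior is sketched below; the secret constant is an assumption, not part of the tutorial.

```php
<?php
// Hypothetical stand-in for User::getAuthtoken(); the secret is an assumption.
define('AUTH_SECRET', 'change-this-secret');

function getAuthtoken($email) {
    // Deterministic: the same email always produces the same token, which
    // verifyRegister() relies on when it re-derives the token from the
    // stored email and compares it against $_GET["authtoken"].
    return md5(strtolower(trim($email)) . AUTH_SECRET);
}
```

Keeping a server-side secret in the hash means a visitor cannot forge a valid verification link just by knowing someone's email address.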
Step 3: Implement User Login
We will design the user login form in the login.php file and handle the login functionality on form submit.
We will create an object of the User class and call the method login() to complete the user login.
include('class/User.php');
$user = new User();
$message = $user->login();
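The tutorial does not list the body of login(). The sketch below is one possible shape for it, written as a standalone function for clarity; it uses a prepared statement and the same md5() hashing as register(). The session key name is an assumption, not something the tutorial specifies.

```php
<?php
// Sketch of a possible login routine; table and column names follow the
// tutorial's schema, but the session key is a hypothetical choice.
function login(mysqli $dbConnect, $email, $password) {
    $stmt = mysqli_prepare($dbConnect,
        "SELECT id, first_name, last_name FROM user
         WHERE email = ? AND password = ? AND status = 'active'");
    $passwordHash = md5($password); // matches the hashing used in register()
    mysqli_stmt_bind_param($stmt, "ss", $email, $passwordHash);
    mysqli_stmt_execute($stmt);
    $result = mysqli_stmt_get_result($stmt);
    $user = mysqli_fetch_assoc($result);
    mysqli_stmt_close($stmt);
    if ($user) {
        $_SESSION["userid"] = $user["id"]; // assumed session key
        return '';
    }
    return 'Invalid email or password.';
}
```

Note that only users whose status is 'active' can log in, which is why verifyRegister() flips the status after the email is verified.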
Step 4: Implement User Password Reset
We will design the password reset form in the reset_password.php file. The user enters and submits their email address in forget_password.php, and a password reset email is sent to that address. When the user clicks the password reset link, it redirects them to the reset password form and asks them to enter and save a new password.
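The reset flow described above needs a one-time token that is emailed to the user and checked when the reset form is submitted. A minimal sketch of generating and validating such a token is below; the one-hour lifetime and 32-character token length are assumptions, not values from the tutorial.

```php
<?php
// Generate a random one-time reset token plus its expiry timestamp.
// The one-hour lifetime is an assumption, not from the tutorial.
function createResetToken() {
    return array(
        'token'   => bin2hex(random_bytes(16)), // 32 hex characters
        'expires' => time() + 3600              // valid for one hour
    );
}

// Check a submitted token against the stored one, in constant time,
// and reject it once the expiry timestamp has passed.
function isResetTokenValid($stored, $submitted, $now) {
    return hash_equals($stored['token'], $submitted) && $now <= $stored['expires'];
}
```

In practice the token and expiry would be saved on the user's row (for example in the authtoken column) when the email is sent, and checked again in reset_password.php before the new password is stored.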
We will design the HTML in user_list.php to display the users list in a DataTable. We will design a modal to add and edit users, and also handle the user delete functionality.
Embarcadero has just released a new patch for RAD Studio 10.4.1. It includes Delphi compiler improvements and Delphi LSP improvements. The patch is available in GetIt, and the RAD Studio IDE Welcome Page should indicate its availability. The patch will also be available in the my.embarcadero.com customer download portal. Read on to learn more about this patch and the two GetIt packages that deliver it.
Delphi Compiler and Code Completion Patch
This patch addresses two issues in the Delphi 10.4.1 compiler: a data layout issue with specific alignments, logged in Quality Portal as RSP-30890 and RSP-30787, and a performance issue when recompiling, logged as RSP-22074, RSP-30714, and RSP-30627. The performance improvement provided in this patch also helps with performance for Code Insight when using the LSP server.
The patch comes in two packages. The first includes updated compilers for all platforms available in Delphi and RAD Studio Professional. The second package includes the Linux compiler and it is available only for Enterprise customers. Delphi and RAD Studio Enterprise customers should see and install both packages (the order doesn’t really matter, as they are independent).
Each of these GetIt packages is a delayed package. This means that you select it, but the actual download and installation happen when you close the RAD Studio IDE, since the patch replaces files used by the IDE. Just follow the steps, wait for the GetItCmd console application to run the process, and note that the files replaced in your RAD Studio installation folders are copied to a special backup directory under the main installation location.
In the screenshot below you can see one of the steps of the automated installation process:
After the delayed installation, RAD Studio restarts and the patch shows as installed in the GetIt Package Manager dialog and on the Welcome Page. Enterprise users will need to install both packages for the Welcome Page notification to go away.
DevOps is a term I hear more and more often in customer conversations, and I frequently share the different ways Delphi, C++Builder, and RAD Studio programming support DevOps. (Read on – free infographic below)
The term DevOps dates from around 2008/09, when the two worlds of development and operations were traditionally stereotyped as Dev vs. Ops, with typical exchanges like "It's not my machines, it's your code!" – "No, it's not my code, it's your machines!" These stereotypes grew out of friction found in the key business requirement for almost every system: making changes! The ability to make changes quickly matters when you want to stay ahead of the competition.
RAD developers are used to developing in an agile way and building changes quickly, but deploying them requires operations. For the operations team, making changes carries a high risk of outages, yet blocking changes prevents innovation from moving forward in a timely fashion. This conflict is something DevOps acknowledges and tries to break down through new ways of working that bring both sides closer together.
Over the years, the two worlds of Dev and Ops have had to start thinking more like each other, including how to get feedback from live environments to find problems that show up in code; agile and DevOps practices gladly reuse and extend those feedback loops. Rather than giving a full commentary on how RAD Studio supports Delphi and C++Builder developers today, I want to keep it simple and say that the libraries, components, toolchains (and more) in RAD Studio and our broader partner ecosystem provide comprehensive support both for developers and for operations teams who need to communicate and share what is happening in the field.
The component-based architecture and the cross-platform libraries that work across systems are the perfect foundation for fast, agile development that is easy to support. But ultimately a picture is worth a thousand words. This is just a snapshot of the RAD ecosystem; there is too much to lay it all out here, but I hope it gives an impression of just the tip of the iceberg and of how the RAD Studio IDE, together with the Delphi and C++Builder languages and libraries, enables development teams around the world to support DevOps today.
DevOps is a term I hear more and more often in conversations with customers, and I often share the different ways in which Delphi, C++Builder, and RAD Studio programming supports DevOps. (Keep reading; the free infographic is below.)
The term DevOps dates back to around 2008/9, when the two worlds of Development and Operations were traditionally stereotyped as Dev vs. Ops, with typical exchanges like "It's not my machines, it's your code!" and "No, it's not my code, it's your machines!". These stereotypes were built on the friction found in a key business requirement of virtually any system: making changes! The ability to make changes quickly is important if you want to stay ahead of the competition.
RAD developers are used to agile development and can make changes quickly; getting those changes deployed, however, requires Operations. For an operations team, making changes carries a high risk of outages, but avoiding change prevents innovations from being rolled out in a timely fashion. DevOps recognizes this conflict and tries to resolve it through new ways of working that bring both sides closer together.
A few days ago we streamlined the installation files for the RAD Server installation. Nothing now stands in the way of installing on newer Windows versions (with the corresponding IIS 10; Internet Information Services, Microsoft's web server). There are just a few points to keep in mind.
A RAD Studio (Delphi / C++Builder) IDE (installed, to have access to the GetIt package; more on that in a moment)
A Windows system on which IIS is to be installed (here: a separate Windows Server 2019)
A RAD Server serial number
Internet access on the production machine
Instructions
First, install IIS on the target system. Under Windows Server 2019 this is done via the Server Manager: during the installation (with the basic IIS packages, i.e. including the IIS Management Console), the ISAPI Extensions must also be installed (found during the installation under Role Services / Application Development):
You can quickly check whether IIS is running: the default website should show the familiar page:
Next, get the installation packages for RAD Server from within RAD Studio (Delphi and/or C++Builder). They are located in the GetIt Package Manager (Tools | GetIt Package Manager). Searching for "RAD Server" should bring up the following:
Download the installer package there. It contains two things: the actual RAD Server installation (for IIS and Apache, including the BPLs and DLLs needed for a basic installation) and InterBase 2020 (for the RAD Server configuration data). After downloading the "RAD Server Installer for Windows 1.0" package, Windows (File) Explorer opens and shows the downloaded files (directory C:\Users\<UserName>\Documents\Embarcadero\Studio\21.0\CatalogRepository\RADServerInstallerforWindows-104-1.0):
NB: Don't let the version number "1.0" confuse you. This package has been updated frequently/regularly lately. As of today: file date August 22, 2020.
From this directory, copy the two files (via FTP, file sharing, USB stick, etc.) to the target system:
RADServer.exe
InterBase_2020_Windows.zip
These two files contain everything needed for the installation on the Windows server and should be copied into a common directory on the target system. Here, onto the desktop:
Start RADServer.exe. The installation itself is fairly straightforward, but there are a few points to note:
At a minimum, "RAD Server DB (InterBase 2020)" and "RAD Server" must be installed. A full installation is recommended.
For the architecture (32/64-bit), choose 64-bit (IIS can also run 32-bit ISAPI DLLs, but that requires additional configuration effort).
Web server: IIS here (naturally)
Default directory: C:\inetpub\RADServer
Site names radserver / radconsole (these are reflected in the part of the URL through which your endpoints are reached)
Site name (for the configuration within the IIS administration: RAD Server / port 80)
Port 80 initially collides with the IIS default site, of course. More on that later.
The installation now installs an InterBase 2020 server, which must be activated. Be sure to do this right away (the RADServer.exe installation routine also configures all the necessary settings for EMSServer.ini, the InterBase server, etc. For that, the InterBase server must already be running!).
In the background you can see the registration wizard, where you activate/license your RAD Server. I am happy to repeat it: enter a RAD Server key here; not a RAD Studio key, not a Delphi key, and not a regular InterBase key:
Register -> enter the serial number
Success:
Close the registration wizard ("OK").
You can then confidently choose "Continue"...
During the rest of the installation, two error messages/notices appear, but they are nothing dramatic:
This is actually a nice opportunity (always look on the bright side) to verify that the InterBase server is running. In the "InterBase Server Manager" from the Windows Start menu it should be shown as "Running". Confirm the error message with "Yes to All".
For anyone interested (optional): before completing the installation, the executed PowerShell commands can be found in the %TEMP% directory. Here it is the directory C:\Users\Administrator\AppData\Local\Temp\I1604922298\InstallerData\Disk1\InstData\Resource1.zip\$IA_PROJECT_DIR$. Two files are of interest here (for IIS): RADServer_IIS_Config.ps1 and RADConsole_IIS_Config.ps1. I have reproduced them below, starting with RADServer_IIS_Config.ps1.
#Parameters from command line
param (
[string]$SiteName,
[string]$Port,
[string]$RootPath,
[string]$SiteDirectory,
[string]$Architecture,
[string]$INIFilePath,
[string]$SelectedFeatures
)
function Set-OrAddIniValue
{
Param(
[string]$FilePath,
[hashtable]$keyValueList
)
$content = Get-Content $FilePath
$keyValueList.GetEnumerator() | ForEach-Object {
if ($content -match "^$($_.Key)=")
{
$content= $content -replace "^$($_.Key)=(.*)", "$($_.Key)=$($_.Value)"
}
else
{
$content += "$($_.Key)=$($_.Value)"
}
}
$content | Set-Content $FilePath
}
#Variables used for creating iis modules
$DriveSitePath = $SiteDirectory
$AppPool = $RootPath
$IISEMSPath = 'IIS:\sites\' + $SiteName + '\' + $RootPath
$DriveEMSPath = $DriveSitePath + '\' + $RootPath
$DLLPath = $DriveEMSPath + "\EMSServer.dll"
$WebAppName = $RootPath
$INIFile = $INIFilePath
function Set-SwaggerValue
{
Param(
[string]$FilePath
)
$content = Get-Content $FilePath
$replaceContent = 'url: "/'+$RootPath+'/EMSServer.dll/API/APIDoc.json",'
$content= $content -replace 'url: "../API/APIDoc.json",', $replaceContent
$content | Set-Content $FilePath
}
#needed for apppool check
import-module webadministration
#check if website exists with the same site name
if(!(Test-Path IIS:\AppPools\$AppPool))
{
#app pool doesn't exist
if ((Test-Path $IISEMSPath) -and ($DriveSitePath.Contains('inetpub\wwwroot')))
{
$AppPool = "DefaultAppPool"
}
else
{
#website doesn't exist
#Create an application pool and website
New-WebAppPool -Name $AppPool
New-WebSite -Name $SiteName -Port $Port -PhysicalPath $DriveSitePath -ApplicationPool $AppPool
}
}
# Allow EMS website to override server-wide handler configurations
Set-WebConfiguration //System.webServer/handlers -metadata overrideMode -value Allow -PSPath IIS:/ -verbose
# Allow execute permissions for the EMS handler
Set-WebConfiguration -filter "/system.webServer/handlers/@AccessPolicy" -PSPath $IISEMSPath -value "Read, Script, Execute" -verbose
# Set up the EMS handler
Set-WebHandler -Name "ISAPI-dll" -Path "*.dll" -PSPath $IISEMSPath -RequiredAccess Execute -ScriptProcessor $DLLPath -ResourceType File -Modules IsapiModule -Verb '*'
# Create the EMS web app
New-WebApplication -Name $WebAppName -Site $SiteName -PhysicalPath $DriveEMSPath -ApplicationPool $AppPool -force
# Add exception to ISAPI and CGI Restrictions
Add-WebConfiguration -pspath 'MACHINE/WEBROOT/APPHOST' -filter "system.webServer/security/isapiCgiRestriction" -value @{description='EMSServerRestriction';path=$DLLPath;allowed='True'}
#check if 32 bit mode was selected
if ($Architecture -contains '32Bit')
{
# Enable 32-bit applications
set-itemProperty IIS:\apppools\$AppPool -name "enable32BitAppOnWin64" -Value "true"
}
else
{
set-itemProperty IIS:\apppools\$AppPool -name "enable32BitAppOnWin64" -Value "false"
}
if ($SelectedFeatures -match 'SUI')
{
$SiteDirectory = $SiteDirectory.Replace('\', '\\')
$SwaggeruiDir=$SiteDirectory+'\\swaggerui'
$PublicPaths='{"path": "swaggerui", "directory": "' + $SwaggeruiDir + '", "default": "index.html", "extensions": ["js", "html", "css", "map", "png"], "charset": "utf-8"}'
Set-OrAddIniValue -FilePath $INIFile -keyValueList @{
swaggerui=$PublicPaths
}
$SwaggeruiDir= $SwaggeruiDir.Replace("\\", "\") + "\index.html"
Set-SwaggerValue -FilePath $SwaggeruiDir
}
RADConsole_IIS_Config.ps1
#Parameters from command line
param (
[string]$SiteName,
[string]$Port,
[string]$RootPath,
[string]$SiteDirectory,
[string]$Architecture,
[string]$INIFilePath,
[string]$SelectedFeatures
)
#This is used to search the ini file and update it to include the correct location for the resources folder
function Set-OrAddIniValue
{
Param(
[string]$FilePath,
[hashtable]$keyValueList
)
$content = Get-Content $FilePath
$keyValueList.GetEnumerator() | ForEach-Object {
if ($content -match "^$($_.Key)=")
{
$content= $content -replace "^$($_.Key)=(.*)", "$($_.Key)=$($_.Value)"
}
else
{
$content += "$($_.Key)=$($_.Value)"
}
}
$content | Set-Content $FilePath
}
#Variables used for creating iis modules
$DriveSitePath = $SiteDirectory
$AppPool = $RootPath
$IISEMSPath = 'IIS:\sites\' + $SiteName + '\' + $RootPath
$DriveEMSPath = $DriveSitePath + '\' + $RootPath
$DLLPath = $DriveEMSPath + "\EMSConsole.dll"
$WebAppName = $RootPath
$INIFile = $INIFilePath
#needed for apppool check
import-module webadministration
#check if website exists with the same site name
if(!(Test-Path IIS:\AppPools\$AppPool))
{
#app pool doesn't exist
if ((Test-Path $IISEMSPath) -and ($DriveSitePath.Contains('inetpub\wwwroot')))
{
$AppPool = "DefaultAppPool"
}
else
{
#website doesn't exist
#Create an application pool and website
New-WebAppPool -Name $AppPool
New-WebSite -Name $SiteName -Port $Port -PhysicalPath $DriveSitePath -ApplicationPool $AppPool
}
}
# Allow EMS website to override server-wide handler configurations
Set-WebConfiguration //System.webServer/handlers -metadata overrideMode -value Allow -PSPath IIS:/ -verbose
# Allow execute permissions for the EMS handler
Set-WebConfiguration -filter "/system.webServer/handlers/@AccessPolicy" -PSPath $IISEMSPath -value "Read, Script, Execute" -verbose
# Set up the EMS handler
Set-WebHandler -Name "ISAPI-dll" -Path "*.dll" -PSPath $IISEMSPath -RequiredAccess Execute -ScriptProcessor $DLLPath -ResourceType File -Modules IsapiModule -Verb '*'
# Create the EMS web app
New-WebApplication -Name $WebAppName -Site $SiteName -PhysicalPath $DriveEMSPath -ApplicationPool $AppPool -force
# Add exception to ISAPI and CGI Restrictions
Add-WebConfiguration -pspath 'MACHINE/WEBROOT/APPHOST' -filter "system.webServer/security/isapiCgiRestriction" -value @{description='EMSConsoleRestriction';path=$DLLPath;allowed='True'}
#check if 32 bit mode was selected
if ($Architecture -contains '32Bit')
{
# Enable 32-bit applications
set-itemProperty IIS:\apppools\$AppPool -name "enable32BitAppOnWin64" -Value "true"
}
else
{
set-itemProperty IIS:\apppools\$AppPool -name "enable32BitAppOnWin64" -Value "false"
}
# Optional: Allow CORS
Add-WebConfigurationProperty //system.webServer/httpProtocol/customHeaders $IISEMSPath -AtIndex 0 -Name collection -Value @{name='Access-Control-Allow-Origin';value='*'}
#Modify emserver.ini to include resources directory
Set-OrAddIniValue -FilePath $INIFile -keyValueList @{
ResourcesFiles=$DriveEMSPath
}
if ($SelectedFeatures -match 'SUI')
{
if ($SelectedFeatures -match 'RS')
{
$SiteDirectory = $SiteDirectory.Replace('\', '\\')
$SwaggeruiDir=$SiteDirectory+'\\swaggerui'
$PublicPaths='{"path": "swaggerui", "directory": "' + $SwaggeruiDir + '", "default": "index.html", "extensions": ["js", "html", "css", "map", "png"], "charset": "utf-8"}'
Set-OrAddIniValue -FilePath $INIFile -keyValueList @{
swaggerui=$PublicPaths
}
}
}
#Copy the ini file from the public documents directory to the radconsole directory
#Copy-Item $INIFile -Destination $DriveEMSPath
After the installation finally completes, the server should (almost!) be running.
Remember (if configured as described above): the site runs on port 80, where the IIS default site also runs. So we disable the default site (depending on your preference/configuration) and start the "RAD Server" site. This is done in IIS Manager: stop the default site ("Default Web Site") and start the "RAD Server" site instead:
Afterwards, the following call should work (in Internet Explorer, directly on the target system):
http://localhost/radserver/emsserver.dll
Or alternatively:
http://localhost/radserver/emsserver.dll/version
Internet Explorer, however, knows nothing about JSON files:
That can be fixed as well (optional!). Create a file with the extension REG:
Windows Registry Editor Version 5.00
;
; Tell IE to open JSON documents in the browser.
; 25336920-03F9-11cf-8FD0-00AA00686F13 is the CLSID for "Browse in place".
;
[HKEY_CLASSES_ROOT\MIME\Database\Content Type\application/json]
"CLSID"="{25336920-03F9-11cf-8FD0-00AA00686F13}"
"Encoding"=hex:08,00,00,00
[HKEY_CLASSES_ROOT\MIME\Database\Content Type\text/json]
"CLSID"="{25336920-03F9-11cf-8FD0-00AA00686F13}"
"Encoding"=hex:08,00,00,00
Then it works with IE, too:
That's actually "already" it...
Notes:
The RADServer.exe installation also created and configured an EMSServer.ini file here:
"C:\Users\Public\Documents\Embarcadero\EMS\emsserver.ini"
The most important configuration parameters are already set correctly there.
The location of the file is stored in the registry:
HKEY_LOCAL_MACHINE\SOFTWARE\Embarcadero\EMS
Your own BPLs can of course live anywhere in the file system, but they may then drag along a long tail of additional DLLs/BPLs.
For testing, you can simply register your first BPL (the standard RAD Server example) in EMSSERVER.INI:
Section [Server.Packages]
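For illustration, such an entry might look like the following sketch (the package path and the description are hypothetical; use the location of your own BPL):

```ini
[Server.Packages]
;Hypothetical entry: <path to your resource package BPL>=<description>
C:\inetpub\RADServer\radserver\MyFirstResource.bpl=My first RAD Server resource
```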
It sometimes happens that the RAD Server Console (http://localhost/radconsole/emsconsole.dll) only shows LOADING:
Solution: run "C:\inetpub\RADServer\radconsole\EMSDevConsole.exe" once and log in (consoleuser/consolepass). Then it works (via port 8081, and afterwards also via the normal port 80).
Afterwards, via the normal port 80:
A School Management System (SMS) is a web application commonly used in schools to manage teachers, students, classes, subjects, sections, student attendance, etc.
So if you're a PHP developer and want to develop a School Management System with PHP, then you're at the right place. In our previous tutorial you learned how to develop an online voting system with PHP and MySQL. In this tutorial you will learn how to develop a School Management System with PHP and MySQL.
We will cover this tutorial in easy steps, developing a live demo of a school management system that covers major functionality such as managing teachers, students, classes, subjects, sections, student attendance, etc. This is a very simple school management system for learning purposes, and it can be enhanced as required into a more advanced system. The download link at the end of the tutorial provides the complete project with database tables.
So let's start implementing the School Management System with PHP and MySQL. Before we begin, take a look at the file structure for this example.
index.php
School.php: A class to hold school methods.
dashboard.php
students.php
teacher.php
classes.php
subjects.php
sections.php
attendance.php
Step1: Create MySQL Database Tables
First we will create the MySQL database tables sms_user, sms_teacher, sms_students, sms_classes, sms_subjects, sms_section and sms_attendance. The structure and data of all tables are available in the project download zip file.
Step2: Create User Login
In the index.php file, we will create a login form to implement admin login, allowing access to logged-in users only.
In the attendance.php file, we will design the HTML to search and list student attendance by class and section. We will also create a student attendance form to handle the student attendance functionality.
In our previous tutorial you learned how to upload multiple images with jQuery, PHP & MySQL. In this tutorial you will learn how to implement Bootstrap modal form submit with jQuery and PHP.
Modals and dialogs play an important role in any web application. Modals let you handle additional functionality on the same page without using any extra space.
Bootstrap modals are very user friendly and easy to integrate. They can be used for different requirements, such as showing specific details on the same page or handling a form submit to process user input.
So if you're looking for a solution to implement a Bootstrap modal with form submit to process form values, then you're at the right place. In this tutorial you will learn how to show a Bootstrap form with inputs, handle the Bootstrap form submit with jQuery, and process the submitted form values at the server in PHP.
The tutorial is explained in easy steps with a live demo of the Bootstrap form submit functionality and a link to download the source code of the live demo.
So let's start the coding. We will have the following file structure for this example of handling Bootstrap form submit with jQuery.
index.php
contact.js
saveContact.php
Step1: Include Bootstrap and jQuery Files
First we will include the Bootstrap and jQuery library files in the head tag of the index.php file. We will also include the contact.js file, in which we handle the form submit using jQuery.
Step2: Design Bootstrap Contact Form
In the index.php file, we will design a Bootstrap contact form with input fields and a submit button. The modal will be opened when the show contact form button is clicked.
Step3: Handle Bootstrap Contact Form Submit
In the contact.js file, we will handle the form submit with the jQuery .submit() function, returning false so the form is not submitted directly to its action. We will submit the form values via Ajax by calling the function submitForm().
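As a rough sketch of what contact.js might contain (the form id, the #statusMessage element, and the response handling are illustrative assumptions; only the file names submitForm() and saveContact.php come from the tutorial):

```javascript
// Sketch of contact.js. Assumes the Bootstrap modal contains a form with
// id="contactForm" and a #statusMessage element (both hypothetical names).
function formToPayload(fields) {
  // Convert the array of {name, value} pairs produced by jQuery's
  // serializeArray() into a plain object suitable for $.post().
  var payload = {};
  fields.forEach(function (field) {
    payload[field.name] = field.value;
  });
  return payload;
}

function submitForm($form) {
  // Post the form values to saveContact.php via Ajax and show the reply.
  $.post("saveContact.php", formToPayload($form.serializeArray()), function (response) {
    $("#statusMessage").text(response);
  });
}

function wireContactForm() {
  $("#contactForm").submit(function () {
    submitForm($(this));
    return false; // prevent the default (non-Ajax) submit to the form action
  });
}
```

Calling wireContactForm() on document ready attaches the handler; returning false from the .submit() callback is what keeps the page from reloading.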
Helpdesk systems, or support ticket systems, are commonly used in companies to help customers resolve their queries and issues. A helpdesk system is used by both the support team and customers to add tickets, reply to tickets, and resolve issues or queries. It allows a customer to add a ticket with issue details, and the support team replies to that ticket with solutions and details.
So if you're thinking about developing a web-based helpdesk ticketing system with PHP, then you're at the right place. In our previous tutorial, you learned how to create a User Management System with PHP & MySQL. In this tutorial, you will learn how to develop a Helpdesk Ticketing System with PHP and MySQL.
We will cover this tutorial step by step with a live example of a helpdesk system: create a ticket, list tickets, edit a ticket, close a ticket, reply to a ticket, view a ticket with replies, etc.
So let's start implementing the Helpdesk Ticketing System with PHP and MySQL. Before we begin, take a look at the file structure for this example.
index.php
ticket.php
ajax.js
process.php
Users.php: A class to hold user methods.
Tickets.php: A class to hold ticket methods.
Step1: Create MySQL Database Tables
We will create MySQL database tables to build the helpdesk system, starting with the hd_users table to store user login details.
CREATE TABLE `hd_users` (
`id` int(11) NOT NULL,
`email` varchar(250) NOT NULL,
`password` varchar(250) NOT NULL,
`sign_up_date` varchar(250) NOT NULL,
`nick_name` varchar(250) NOT NULL,
`user_group` int(11) NOT NULL,
`last_login` varchar(250) NOT NULL,
`url` varchar(270) NOT NULL,
`allowed` int(11) NOT NULL,
`most_recent_ip` varchar(100) NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
We will create the hd_departments table to store support team department details.
CREATE TABLE `hd_departments` (
`id` int(11) NOT NULL,
`name` varchar(50) NOT NULL,
`hidden` int(1) NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
We will create hd_tickets table to store ticket details.
CREATE TABLE `hd_tickets` (
`id` int(11) NOT NULL,
`uniqid` varchar(20) NOT NULL,
`user` int(11) NOT NULL,
`title` varchar(250) NOT NULL,
`init_msg` text NOT NULL,
`department` int(11) NOT NULL,
`date` varchar(250) NOT NULL,
`last_reply` int(11) NOT NULL,
`user_read` int(11) NOT NULL,
`admin_read` int(11) NOT NULL,
`resolved` int(11) NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
We will create the hd_ticket_replies table to store ticket reply details.
CREATE TABLE `hd_ticket_replies` (
`id` int(11) NOT NULL,
`user` int(11) NOT NULL,
`text` text NOT NULL,
`ticket_id` int(11) NOT NULL,
`date` varchar(20) NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
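The CREATE TABLE statements above define columns only. If you are building the schema from scratch, you will also need primary keys and auto-increment ids for the joins and inserts below to work; one way to add them (a sketch, not taken from the original dump) is:

```sql
-- Add a surrogate primary key with auto-increment to each table
ALTER TABLE `hd_users`          ADD PRIMARY KEY (`id`), MODIFY `id` int(11) NOT NULL AUTO_INCREMENT;
ALTER TABLE `hd_departments`    ADD PRIMARY KEY (`id`), MODIFY `id` int(11) NOT NULL AUTO_INCREMENT;
ALTER TABLE `hd_tickets`        ADD PRIMARY KEY (`id`), MODIFY `id` int(11) NOT NULL AUTO_INCREMENT;
ALTER TABLE `hd_ticket_replies` ADD PRIMARY KEY (`id`), MODIFY `id` int(11) NOT NULL AUTO_INCREMENT;
```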
Step 2: Create the Tickets Dashboard
First we will create a dashboard to display the ticket listing with edit, close, and view options.
<p>View and manage tickets that may have responses from support team.</p>
<table id="listTickets" class="table table-bordered table-striped">
<thead>
<tr>
<th>S/N</th>
<th>Ticket ID</th>
<th>Subject</th>
<th>Department</th>
<th>Created By</th>
<th>Created</th>
<th>Status</th>
<th></th>
<th></th>
<th></th>
</tr>
</thead>
</table>
In the ajax.js file, we will make an Ajax request to process.php with the action listTicket to load the ticket list with details.
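A minimal sketch of that request, assuming jQuery and the DataTables plugin (which the table’s `table table-bordered table-striped` classes and the server-side listing suggest — the column definitions are omitted here):

```javascript
// ajax.js (sketch): initialize the #listTickets table and have
// DataTables POST action=listTicket to process.php for its rows.
$(document).ready(function () {
    $('#listTickets').DataTable({
        processing: true,
        serverSide: true,
        ajax: {
            url: 'process.php',
            type: 'POST',
            data: { action: 'listTicket' }
        }
    });
});
```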
In the Tickets.php class, we will create a method getTicketReplies() to get a ticket’s reply details from the MySQL database.
public function getTicketReplies($id) {
    $sqlQuery = "SELECT r.id, r.text as message, r.date, u.nick_name as creater, d.name as department, u.user_group
        FROM ".$this->ticketRepliesTable." r
        LEFT JOIN ".$this->ticketTable." t ON r.ticket_id = t.id
        LEFT JOIN hd_users u ON r.user = u.id
        LEFT JOIN hd_departments d ON t.department = d.id
        WHERE r.ticket_id = ?";
    // Use a prepared statement so the ticket id cannot be used for SQL injection.
    $stmt = mysqli_prepare($this->dbConnect, $sqlQuery);
    mysqli_stmt_bind_param($stmt, 'i', $id);
    mysqli_stmt_execute($stmt);
    $result = mysqli_stmt_get_result($stmt);
    $data = array();
    while ($row = mysqli_fetch_array($result, MYSQLI_ASSOC)) {
        $data[] = $row;
    }
    return $data;
}
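In ticket.php, the replies returned by this method can then be rendered. A hypothetical sketch (the variable names here are illustrative, not from the original source; the `creater`, `date`, and `message` keys come from the query aliases above):

```php
<?php
// Hypothetical rendering loop for ticket.php; $tickets is an instance
// of the Tickets class and $_GET['id'] identifies the ticket being viewed.
$replies = $tickets->getTicketReplies(intval($_GET['id']));
foreach ($replies as $reply) {
    echo '<div class="ticket-reply">';
    echo '<strong>' . htmlspecialchars($reply['creater']) . '</strong> ';
    echo '<em>' . date('Y-m-d H:i', (int) $reply['date']) . '</em>';
    echo '<p>' . nl2br(htmlspecialchars($reply['message'])) . '</p>';
    echo '</div>';
}
?>
```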
Step 6: Make a Ticket Reply
We will create the design of the ticket reply form in ticket.php.
In the Tickets.php class, we will create a method saveTicketReplies() to save ticket replies into the MySQL database.
public function saveTicketReplies() {
    if (!empty($_POST['message'])) {
        $date = new DateTime();
        $date = $date->getTimestamp();
        // Insert the reply with a prepared statement to avoid SQL injection.
        $stmt = mysqli_prepare($this->dbConnect,
            "INSERT INTO ".$this->ticketRepliesTable." (user, text, ticket_id, date) VALUES (?, ?, ?, ?)");
        mysqli_stmt_bind_param($stmt, 'isis', $_SESSION["userid"], $_POST['message'], $_POST['ticketId'], $date);
        mysqli_stmt_execute($stmt);
        // Mark the ticket as having a new, unread reply.
        $stmt = mysqli_prepare($this->dbConnect,
            "UPDATE ".$this->ticketTable." SET last_reply = ?, user_read = '0', admin_read = '0' WHERE id = ?");
        mysqli_stmt_bind_param($stmt, 'ii', $_SESSION["userid"], $_POST['ticketId']);
        mysqli_stmt_execute($stmt);
    }
}
We have also handled other functionality related to tickets, customers, and support. To get all the files, you can download the complete project code and enhance it to suit your requirements.
You can view the live demo from the Demo link and download the script from the Download link below.
DelphiFeeds.com was launched by the Gurock brothers back in 2005. Since then, Gurock’s TestRail product has really taken off, and they were so busy that they no longer had time to maintain DelphiFeeds. It continued collecting feeds and sharing headlines, but updated feed sources were no longer being added. In the meantime we’ve seen new sites like BeginEnd.net and, most recently, DelphiMagazine.com with updated feed lists, but DelphiFeeds remained the de facto news source for many in the community.
While DelphiFeeds never died, today it is reborn: running on a brand-new server back end, with all the old feeds and some updated new ones. Over time, new feed sources will be added and updated, and old ones removed. None of the previous trending articles or user accounts were migrated, and registration of new user accounts is not yet enabled, but all that and more is coming soon.
If you have another news site you prefer, that’s fine; but if not, then I’d recommend checking out the new DelphiFeeds and staying tuned for more updates and upgrades!
In partnership with companies like Raize Software, DevExpress, and Microsoft, Embarcadero hosted the Desktop First UX Summit in mid-September 2020. This three-day event brought together more than a dozen Embarcadero MVPs and technology partners to discuss desktop user experience and UI/UX design in an era of mobile-first fever. Starting with a keynote presentation by Ray Konopka (president, Raize Software) on the current state of desktop UX and misapplied mobile UI design, the summit covered a wide variety of topics, from fluent and consistent UX design to the importance of right-click context menus. A full list of speakers and their presentations can be found at summit.desktopfirst.com.
To extend the reach of these experts and promote sound design practices, I am writing a series of posts to capture the highlights of each presentation and panel and provide a link to the full presentation.
The absolute fastest way to get started with Linux deployment from Delphi is to use the Windows Subsystem for Linux (WSL). As of Windows 10 version 2004 (build 19041), WSL2 includes a full Linux kernel, so debugging and everything else works as expected.
Install WSL2 (you can check your build number via the System Information applet, but build 19041 has been available for a while now):
Control Panel
Programs
Turn Windows features on or off
Windows Subsystem for Linux
Restart
Install Ubuntu via the Microsoft Store – Ubuntu without a version number is the current LTS release and will be updated in the future. There are other distributions (Kali, Pengwin, Alpine WSL, etc.), but they are all slightly different.
Launch Ubuntu – via the Start menu, or from a PowerShell/Terminal/CLI window with the wsl or ubuntu commands. If you have more than one Linux installed, WSL launches the default one. The first time you launch it, be aware that it takes a few minutes and then prompts you for new Linux credentials.
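On the Windows side, the wsl command line can confirm what is installed and which distro launches by default (these flags are standard WSL tooling; the distro name Ubuntu assumes the Store install above):

```shell
wsl -l -v                     # list installed distros and their WSL version
wsl --set-version Ubuntu 2    # ensure Ubuntu runs under WSL2
wsl --set-default Ubuntu      # make a plain `wsl` launch Ubuntu
```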
Run SetupUbuntu4Delphi21.sh – I created a script on gist that does all the setup for Ubuntu to get it ready for targeting from Delphi. You can download it with wget or type the commands in manually. It creates a script file named pa21.sh in your home folder to launch PAServer quickly. You can modify it to pass default configuration settings.
#!/bin/bash
echo "Updating the local package directory"
sudo apt update
echo "Upgrading any outdated packages"
sudo apt full-upgrade -y
echo "Installing new packages necessary for Delphi"
sudo apt install joe wget p7zip-full curl openssh-server build-essential zlib1g-dev libcurl4-gnutls-dev libncurses5 xorg libgl1-mesa-dev libosmesa-dev libgtk-3-bin -y
echo "Cleaning up unused packages"
sudo apt autoremove -y
cd ~
# Quote this message: unquoted parentheses are a bash syntax error.
echo "Downloading LinuxPAServer for Sydney 10.4 (21.0) Update 1"
wget https://altd.embarcadero.com/releases/studio/21.0/1/PAServer/LinuxPAServer21.0.tar.gz
echo "Setting up directories to extract PAServer into"
mkdir -p PAServer/21.0
tar xvf LinuxPAServer21.0.tar.gz -C PAServer/21.0 --strip-components=1
rm LinuxPAServer21.0.tar.gz
# Quote the shebang: an unquoted # starts a comment and swallows the redirect.
echo '#!/bin/bash' > pa21.sh
echo '~/PAServer/21.0/paserver' >> pa21.sh
chmod +x pa21.sh
echo -----------------------------------
echo "To launch PAServer, type ~/pa21.sh"
echo -----------------------------------
~/pa21.sh
Run the broadwayd server – You should already have paserver running (the script above launched it), so you will probably want a new Ubuntu terminal window where you can start broadwayd. I like to use the new Windows Terminal, since it makes it easy to open multiple tabs, and WSL integrates nicely with it.
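This step can be sketched as follows, using GTK’s Broadway backend (the app binary name is hypothetical; by default broadwayd serves display :0 over HTTP on port 8080, matching the localhost:8080 address used below):

```shell
# In a second Ubuntu terminal: start Broadway on the default display :0
broadwayd :0 &
# A GTK-based app launched with these variables renders into the browser
GDK_BACKEND=broadway BROADWAY_DISPLAY=:0 ./MyFmxApp
```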
Import the Linux SDK into the Delphi IDE – Tools > Options > Deployment > SDK Manager – The IP address for the PAServer instance is localhost/127.0.0.1, so keep in mind that you are sharing ports between the WSL instance and your Windows 10 host OS.
Now it is just a matter of running most any FireMonkey project and connecting to localhost:8080 from your browser.
Keep in mind that there is one instance of the app running on the server for each client, and each has its own port number. There are ways to manage this on the server with a little effort, but that is a blog post for another day.
Windows 10 is now used by 3 out of 4 Windows desktop computers! That market share grew by around 10% in 2020, after Windows 10 originally passed Windows 7 at the end of 2017.
Windows 7 has dropped to around 18% and is still falling, in part as businesses continue to move to Windows 10 after Windows 7 reached end of life in January. Windows 8.1 is stable at around 4%.
What does this mean for me as a developer?
More than ever, you need to make sure your applications are Windows 10 ready. Windows 10 made fundamental adjustments to the user interface in response to a range of hardware innovations and usage patterns. This includes per-monitor support for different resolutions and DPIs, along with the improvements for high-DPI support.
High-DPI support is no longer optional. Without it, your application can become unusable on certain displays, and the end-user experience will suffer.
There is still a case for backwards compatibility with older Windows versions, though. (Something the VCL supports while implementing the new Windows 10 controls – YES – you can run them on Windows 7 and Windows 8 if you use the VCL.)
For more information on Windows 10 and some of the new controls and Windows 10 features in RAD Studio for Delphi and C++Builder, check out this blog post on the 5 unique features for Windows 10.
When it comes to device types, usage is split between mobile (50.33%) and desktop (47.04%), with tablets accounting for just 2.63% of market share.
This means Android (and iOS too) is an important platform and a technical advantage for extending the capabilities of your desktop applications. The barrier to entry is low and adoption is high, making it an ideal target for enhancing your product offering and maximizing your return on development.
With more mobile devices in use than desktops, mobile cannot be ignored in product innovation. Mobile devices offer developers key technical capabilities – e.g. camera, accelerometer, compass, etc. – and, in conjunction with desktop solutions, enable innovative options for data capture.
Since the core system libraries in Delphi are cross-platform, you can also accelerate your mobile development using a single code base. Large parts of your business logic can quickly be moved from Windows to iOS and Android.
It is also worth looking at the Enterprise edition of Delphi for access to InterBase ToGo for mobile as a royalty-free runtime database. Full on-disk database encryption provides the highest level of data security – normally reserved for enterprise servers – in a highly distributable, small-footprint database.
If you want to use a mobile device alongside a local application (and the data is not needed centrally for processing first), the unique approach of App Tethering is definitely worth a look. App Tethering avoids the need to push data to a central server, making things faster (since the data stays local). If this is of interest, definitely check out this webinar replay.
Alternatively, RAD Server is an excellent way to expose existing business logic as a remote API. Click here to see more blogs on RAD Server.
Region-specific trends
If you want to dig deeper into regional trends, we recommend visiting StatCounter and using the interactive charts powered by FusionCharts (which also recently joined the Idera group).
Welcome! I’m glad you’re here again for some more Dart and Flutter magic.
✨ In the previous episode of this series, we looked at Dart and went from basically zero to hero with all those types, classes and asynchrony. I hope you had enough practice on Dart because today, we’ll move forward to Flutter. Let’s get started!
Quick heads up: from here on, the “👉” emoji will mark comparisons between JS/React and Dart/Flutter language examples. Just like in the previous episode, the left side will be the JS/React code and the right side will be the Dart/Flutter equivalent, e.g. console.log("hi!"); 👉 print("hello!");
What is Flutter, and why we’ll use it
Flutter and Dart are both made by Google. While Dart is a programming language, Flutter is a UI toolkit that can compile to native Android and iOS code. Flutter has experimental web and desktop app support, and it’s the native framework for building apps for Google’s Fuchsia OS.
This means that you don’t need to worry about the platform, and you can focus on the product itself. The compiled app is always native code, since Dart compiles to ARM, giving you the best cross-platform performance you can get right now, at over 60 fps.
Flutter also helps the fast development cycle with stateful hot reload, which we’ll make use of mostly in the last episode of this series.
Intro to the Flutter CLI
When building apps with Flutter, one of the main tools on your belt is the Flutter CLI. With the CLI, you can create new Flutter projects, run tests on them, build them, and run them on your simulators or emulators. The CLI is available on Windows, Linux, macOS and x64-based ChromeOS systems.
Once you have the CLI installed, you’ll also need either Android Studio, Xcode, or both, depending on your desired target platform(s).
(Flutter is also available on the web and for desktop, but they are still experimental, so this tutorial will only cover the Android and iOS related parts).
If you don’t wish to use Android Studio for development, I recommend VSCode. You can also install the Dart and Flutter plugins for Visual Studio Code.
Once you’re all set with all this new software, you should be able to run flutter doctor. This utility will check whether everything is working properly on your machine. At the time of writing, Flutter printed this into the console for me:
[✓] Flutter (Channel stable, v1.17.4, on Mac OS X 10.15.4 19E287, locale en-HU)
[✓] Android toolchain - develop for Android devices (Android SDK version 29.0.2)
[✓] Xcode - develop for iOS and macOS (Xcode 11.5)
[!] Android Studio (version 3.5)
✗ Flutter plugin not installed; this adds Flutter specific functionality.
✗ Dart plugin not installed; this adds Dart specific functionality.
[✓] VS Code (version 1.46.1)
[!] Connected device
! No devices available
You should get similar results, at least for the Flutter part. Everything else depends on your desired target platforms and your preferred IDEs, like Android Studio or VS Code. If you get an ✗ for something, check again that everything is set up properly.
Only move forward in this tutorial if everything works properly.
To create a new Flutter project, cd into your preferred working directory, and run flutter create <projectname>. The CLI will create a directory and place the project files in there. If you use VS Code on macOS with an iOS target, you can use this little snippet to speed up your development process:
# Create a new project
flutter create <projectname>
# move there
cd projectname
# open VS code editor
code .
# open iOS Simulator - be patient, it may take a while
open -a Simulator.app
# start running the app
flutter run
And boom, you’re all set! 💅
If you don't wish to use the iOS Simulator, you can always spin up your Android Studio emulator, use Genymotion (or any other Android emulation software), or even connect a real device to your machine. Real devices are a slower and more error-prone option, so I recommend testing on them only when necessary.
Once they have booted, you can run flutter doctor again and check whether Flutter sees the connected device. You should get output something like this:
...
[✓] Connected device (1 available)
...
If you got this output, congratulations! 🎉 You're all set to move on with this tutorial. If, for some reason, Flutter didn't recognize your device, please go back and check everything again, as you won't be able to follow the instructions from this point on.
Hello world! 🌍
If you didn’t run the magic snippet previously, run these commands now:
# Create a new project
flutter create <projectname>
# move there
cd <projectname>
# open VS code editor (optional if you use Studio)
code .
# start running the app
flutter run
This will spin up the Flutter development server with stateful hot reload and a lot more for you. You'll see that, by default, Flutter creates a project with a floating action button and a counter:
Once you're finished playing around with the counter, let's dig into the code! 👨💻
Flutter project structure
Before we dig right into the code, let’s take a look at the project structure of our Flutter app for a moment:
├── README.md
├── android
│   └── ton of stuff going on here...
├── build
│   └── ton of stuff going on here...
├── ios
│   └── ton of stuff going on here...
├── lib
│   └── main.dart
├── pubspec.lock
├── pubspec.yaml
└── test
    └── widget_test.dart
We have a few platform-specific directories: android and ios. These contain the necessary stuff for building, like the AndroidManifest, build.gradle, or your xcodeproj.
At this moment, we don't need to modify the contents of these directories, so we'll ignore them for now. We'll also ignore the test directory, as we won't cover testing Flutter in this series (but we may look into it later if there's interest 👀), so that leaves us with these:
And this is where the magic happens. Inside the lib directory, you have the main.dart: that’s where all the code lives right now. We’ll peek into it later, but let’s just have a look at the pubspec.yaml and pubspec.lock.
What are those?
Package management in Flutter - pub.dev
When building a project with JavaScript, we often use third party components, modules, packages, libraries, and frameworks so that we don’t have to reinvent the wheel. The JavaScript ecosystem has npm and yarn to provide you with all those spicy zeroes and ones, and they also handle the dependencies inside your project.
In the Dart ecosystem, this is all handled by pub.dev.
So, just a few quick facts:
npm 👉 pub.dev
package.json 👉 pubspec.yaml
package-lock.json 👉 pubspec.lock
We’ll look into installing packages and importing them into our app in the last episode of this series, in which we’ll create a fun mini-game.
Digging into the Dart code
The only thing left from the file tree is main.dart. main is the heart of our app; it's like the index.js of most JS-based projects. By default, when creating a project with flutter create, you'll get very well-documented code with a StatelessWidget, a StatefulWidget, and its State.
So instead of observing the demo code line by line together, I encourage you to read the generated code and comments by yourself and come back here later.
In the next part, we'll look into what widgets are and what the build method does.
We'll learn why build is marked with @override, and what the difference is between stateful and stateless widgets. Then we'll delete all the code from main.dart and create a Hello World app ourselves so that you can get the hang of writing declarative UI code in Flutter.
Go ahead, read the generated code and the documentation now! 👀
In Flutter, everything is a widget!
As you have been reading the code, you may have noticed a few things. The first thing after importing Flutter is the entry method I have been talking about in the previous episode:
void main() {
  runApp(MyApp());
}
And then, you could see all those classes and OOP stuff come back with the line class MyApp extends StatelessWidget.
First things first: in Flutter, everything is a widget!
Oh, and speaking of widgets. Components 👉 Widgets!
The StatelessWidget is a class from the Flutter framework, and it’s a type of widget. Another kind of widget is StatefulWidget and we’ll look into the difference between those and how to use them later.
We can create our reusable widget by extending the base class StatelessWidget with our own build method. (By the way, render in ReactJS 👉 build in Flutter). We can see that the build returns a Widget because the return type is defined, and we can see an odd keyword in the previous line: @override.
It’s needed because the StatelessWidget class has a definition for build by default, but we want to replace it (or override it) with our own implementation - hence the keyword @override. Before we dig further into the code, let’s have a peek at using widgets in Flutter:
// using a React component
<button onClick={() => console.log('clicked!')}>Hi, I'm a button</button>

// using a Flutter widget
RawMaterialButton(
  onPressed: () {
    print("hi, i'm pressed");
  },
  child: Text("press me!"),
),
You can see that Flutter has a different approach with declarative UI code.
Instead of wrapping children between ><s and passing props next to the component name (e.g. <button onClick ...), everything is treated as a property. This enables Flutter to create more flexible and well-typed widgets: we’ll always know if a child is supposed to be a standalone widget or if it can accept multiple widgets as a property, for example. This will come in handy later when we’ll build layouts with Rows and Columns.
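To tie this back to the build method and the @override keyword, here's a minimal sketch of a custom StatelessWidget. The Greeting class and its name prop are my own invention for illustration, not part of the generated project:

```dart
import 'package:flutter/material.dart';

// A minimal custom widget: extend StatelessWidget and override build.
class Greeting extends StatelessWidget {
  // Props arrive through the constructor and are stored as final fields.
  final String name;

  Greeting({this.name});

  @override
  Widget build(BuildContext context) {
    // build returns the widget subtree this component renders.
    return Text('Hello, $name!');
  }
}
```

You'd then use it like any built-in widget, e.g. Greeting(name: 'Dan').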
Now that we know a bit more about widgets in Flutter, let’s take a look at the generated code again:
The build method returns a MaterialApp that has a type of Widget and - unsurprisingly - comes from Flutter. This MaterialApp widget is a skeleton for your Flutter app. It contains all the routes, theme data, metadata, locales, and other app-level black magic you want to have set up. 🧙
You can see the MyHomePage class being referenced as the home screen. It also has a property, title, set up. MyHomePage is also a widget, and we can confirm that by looking at the definition of this class.
Quick tip: if you are using VS Code as your editor, hold Command and hover over or click on the class reference, and you'll be taken to the code of the class.
We can see that MyHomePage extends a StatefulWidget. However, the structure of the code itself is a bit squiggly and weird. What’s this MyHomePage({Key key, this.title}) : super(key: key); syntax? Why doesn’t this widget have a build method? What’s a State? What is createState?
To answer these questions, we'll have to look into one of the more hardcore topics in Flutter: state management.
Local state management in Flutter: StatefulWidgets
I previously talked about the two main types of widgets in Flutter: StatelessWidgets and StatefulWidgets. StatelessWidgets are pretty straightforward: a snippet of code that returns a Widget, maybe with some properties being passed around, but that's all the complexity there is.
However, we don’t want to write applications that just display stuff! We want to add interactivity! And most interactions come with some state, whether it’s the data stored in an input field or some basic counter somewhere in your app. And once the state is updated, we want to re-render the affected widgets in our app - so that the new data is being displayed for the user.
Think of state management in React: it has the very same purpose with the goal of being as efficient as possible. It’s no different in Flutter: we want to have some very simple widgets (or StatelessWidgets), and some widgets with a bit of complexity and interactivity (or StatefulWidgets).
Let’s dive into the code: a StatefulWidget consists of two main components:
a StatefulWidget (that is called MyHomePage in our case)
a typed State object (that is called _MyHomePageState in this example)
We'll call these "widget" and "state" (respectively) for the sake of simplicity. The widget itself contains all the props and an overridden createState method. As you can see, the prop is marked final - that's because you cannot change a prop from within the widget. When a prop of a widget changes, Flutter throws the current instance away and creates a brand new widget.
Note that changing either the prop or the state will trigger a rebuild in Flutter - the key difference between the two is that changing the state can be initiated from within the widget while changing a prop is initiated by the parent widget.
Props help you pass data from parent to children. State helps you handle data change inside the children.
Now, let's look into changing the state: inside the widget, we have a createState method that only returns the state, _MyHomePageState(). Note that createState only runs when the widget is first inserted into the tree; when you later modify the state with the setState method, the State object survives, and Flutter simply calls build again so that a fresh widget subtree replaces the old one in the widget tree.
(Sidenote: the widget tree is only a blueprint of your app, the element tree is the one that gets rendered for the user. It’s a bit more advanced, under-the-hood topic, so it won’t be covered in this series - however, I’ll link some video resources later on that will help you understand how Flutter works and what’s the deal with the widget tree and the element tree.)
The _MyHomePageState class extends State, typed with MyHomePage (i.e. State<MyHomePage>).
This is needed so that you can access the properties set in the MyHomePage instance with the widget keyword - for example, to access the title prop, write widget.title. Inside the state, you have an overridden build method, just like you’d see in a typical StatelessWidget. This method returns a widget that renders some nice data, both from props (widget.title) and from the state (_counter).
Notice that you don’t need to type in anything before the _counter. No this.state._counter, no State.of(context)._counter, just a plain old _counter. That’s because from the perspective of the code, this variable is declared just like any other would be:
int _counter = 0;
However, when modifying this variable, we need to wrap our code in setState, like this:
setState(() {
  _counter++;
});
This will tell Flutter that “Hey! It’s time to re-render me!”.
The framework will then re-run the build method; a new widget subtree gets built and rendered; and boom! 💥 The new data is now on-screen.
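Putting the widget-plus-state pieces together, a stripped-down counter could look like this sketch. It's simplified from the generated demo, and the class names are illustrative:

```dart
import 'package:flutter/material.dart';

// The widget half: holds props (none here) and creates the state.
class Counter extends StatefulWidget {
  @override
  _CounterState createState() => _CounterState();
}

// The state half: holds mutable data and the build method.
class _CounterState extends State<Counter> {
  int _counter = 0;

  @override
  Widget build(BuildContext context) {
    return RaisedButton(
      onPressed: () {
        // Wrapping the mutation in setState triggers a rebuild.
        setState(() {
          _counter++;
        });
      },
      child: Text('Pressed $_counter times'),
    );
  }
}
```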
It may seem a bit complicated or seem like you have to write a lot of boilerplate code to get this running. But don’t worry! With VS Code, you can refactor any StatelessWidget into a stateful one with just one click:
And that’s it for managing your widget’s state! It may be a lot at first, but you’ll get used to it after building a few widgets.
A few notes about global state management in Flutter
Right now, we only looked at working with local state in Flutter - handling app-level, or global state is a bit more complex. There are, just like in JS, tons of solutions, ranging from the built-in InheritedWidget to a number of third-party state management libraries. Some of those may already be familiar, for example, there is RxDart and Redux, just to name a few. To learn more about the most popular solutions, and which one to choose for your project, I suggest you watch this awesome video about global state management in Flutter by Fireship.
Widgets, widgets, and widgets
I already talked about how everything is a widget in Flutter - however, I didn’t really introduce you to some of the most useful and popular widgets in Flutter, so let’s have a look at them before we move on!
Flutter has widgets for displaying texts, buttons, native controls like switches and sliders (cupertino for iOS and material for Android style widgets), layout widgets like Stack, Row, Column and more. There are literally hundreds of widgets that are available for you out of the box, and the list keeps growing.
The whole widget library can be found in the Widget Catalog, and the Flutter team is also working on a very nice video series, with new episodes released weekly. This series is called Flutter Widget of the Week, and each episode introduces you to a Flutter widget and its use cases, shows you code examples, and more, in just about one minute! It's really binge-worthy if you want to get to know some useful Flutter widgets, tips, and tricks.
As you’ll work with Flutter, you’ll explore more and more widgets, but there are some basic Flutter widgets you’ll absolutely need to build your first application. (We’ll probably use most of them in the next and last episode of this series, so stay tuned!)
First and foremost: Text.
The Text widget delivers what its name promises: you can display strings with it. You can also style or format your text and even make multiline texts. (There are a lot of text-related widgets available, covering your needs from displaying rich text fields to creating selectable texts.)
An example Text widget in Flutter:
Text('hello world!'),
Adding buttons to your Flutter app is also easy as one two three. There are numerous button-related widgets available for you ranging from RawMaterialButton to FlatButton, IconButton, and RaisedButton, and there are also specific widgets for creating FloatingActionButtons and OutlineButtons. I randomly picked 🎲 the RaisedButton for us so that we can have a peek at how easy it is to add a nice, stylish button into our app:
RaisedButton(
  onPressed: () {
    print(
      "hi! it's me, the button, speaking via the console. over.",
    );
  },
  child: Text("press meeeeeee"),
),
Building layouts in Flutter
When building flexible and complex layouts on the web and in React-Native, the most important tool you used was flexbox. While Flutter isn’t a web-based UI library and hence lacks flexbox, the main concept of using flexible containers with directions and whatnot is implemented and preferred in Flutter. It can be achieved by using Rows and Columns, and you can stack widgets on each other by using Stacks.
Consider the following cheatsheet I made:
Remember how I previously praised typed widget props and how they're one of the best tools in Flutter's declarative UI pattern? The Row, Column, and Stack widgets all have a children property that wants a list of widgets, or [Widget]. Lucky for you, VS Code automatically completes the code for you once you start working with these widgets:
Just hit Tab to let Code complete the code for you! Maybe in the future you won't need to write code at all; Flutter will just suck the app idea out of your brain and compile that. But until then, get used to hitting Tab.
Let’s look at an example where we display some names underneath each other:
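A Column displaying a few names underneath each other might be sketched like this (the names are arbitrary):

```dart
Column(
  // A typed list of widgets, passed in as the children prop.
  children: <Widget>[
    Text('Sarah'),
    Text('Mac'),
    Text('Jane'),
    Text('Daniel'),
  ],
),
```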
You can see that you create a typed list with the <Widget>[] syntax, you pass it as a prop for the Column, create some amazing widgets inside the list, and boom! The children will be displayed underneath each other. Don’t believe me? Believe this amazing screenshot. 📸
Alignment
The real power of Columns and Rows isn’t just placing stuff next to each other, just like flexbox isn’t only about flex-direction either. In Flutter, you can align the children of a Column and Row on two axes, mainAxis and crossAxis.
These two properties are contextual: whilst in a Row, the main axis would be horizontal, and the crossing axis would be vertical, it would be switched in a Column. To help you better understand this axis concept, I created a handy cheat sheet with code examples and more.
So, for example, if you want to perfectly center something, you can use the Center widget, or a Row or Column with both mainAxisAlignment and crossAxisAlignment set to .center, or a Row nested inside a Column with both of their mainAxisAlignments set to .center. The possibilities are basically endless with these widgets! ✨
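As a quick sketch, here's one of those centering options in code (the text is arbitrary):

```dart
Column(
  // Center the children along both axes of the Column.
  mainAxisAlignment: MainAxisAlignment.center,
  crossAxisAlignment: CrossAxisAlignment.center,
  children: <Widget>[
    Text('dead center!'),
  ],
),
```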
Rendering lists (FlatLists 👉 ListViews)
Whilst thinking about possible use cases for columns, you may have wondered about creating scrollable, dynamic, reorderable, or endless lists.
While these features could be achieved by using Columns, it would take a lot of effort to do so, not even mentioning updating your list data or lazy rendering widgets when there’s a crapton of data. Lucky you, Flutter has a class for rendering lists of data, and it’s called a ListView!
There are several ways to use a ListView, but the most important ones are the ListView(...) widget and the ListView.builder method. Both of them achieve the very same functionality from the perspective of the user, but programmatically, they differ big time.
First, let's look into the ListView(...) widget. Syntactically, it is very similar to a Column, except that it lacks the main and cross-axis alignment properties. To continue with our previous Column example, where we placed names under each other, I'll display the very same column converted into a ListView:
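Converted to a ListView, that column might be sketched as follows (same arbitrary names as before):

```dart
ListView(
  // Same children prop as a Column, but scrollable when it overflows.
  children: <Widget>[
    Text('Sarah'),
    Text('Mac'),
    Text('Jane'),
    Text('Daniel'),
  ],
),
```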
Tada! 🎉 Your first ListView in Flutter! When refreshing or rebuilding the app (by either pressing a small or capital R in the Flutter CLI), you’ll see the very same thing you saw previously.
However, if you try to drag it, you are now able to scroll inside the container! Note that when a Column has bigger children than its bounds, it will overflow, but a ListView will be scrollable.
ListView builder
While the ListView widget is cool and good, it may not be suitable for every use case. For example, when displaying a list of tasks in a todo app, you won't know the exact number of items in your list while writing the code, and it may even change over time. Sure, you could run .map on the data source, return widgets as results, and then spread them with the ... operator, but that obviously wouldn't be performant, nor is it good practice for long lists. Instead, Flutter provides us with a really nice ListView builder.
Sidenote: while working with Flutter, you’ll see the word “builder” a lot. For example, in places like FutureBuilder, StreamBuilder, AnimatedBuilder, the build method, the ListView builder, and more. It’s just a fancy word for methods that return a Widget or [Widget], don’t let this word intimidate or confuse you!
So how do we work with this awesome method? First, you should have an array or list that the builder can iterate over. I’ll quickly define an array with some names in it:
final List<String> source = ["Sarah", "Mac", "Jane", "Daniel"];
And then, somewhere in your widget tree, you should be able to call the ListView.builder method, provide some properties, and you’ll be good to go:
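A sketch of such a builder call, assuming the source list defined above:

```dart
ListView.builder(
  // Telling Flutter the item count up front helps it optimize.
  itemCount: source.length,
  // itemBuilder is called lazily for each row as it scrolls into view.
  itemBuilder: (BuildContext context, int i) => Text(source[i]),
),
```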
Oh, and notice how I was able to use an arrow function, just like in JavaScript!
The itemCount parameter is not required, but it’s recommended. Flutter will be able to optimize your app better if you provide this parameter. You can also limit the maximum number of rendered items by providing a number smaller than the length of your data source.
When in doubt, you can always have a peek at the documentation of a class, method, or widget by hovering over its name in your editor:
And that sums up the layout- and list-related part of this episode. Next, we'll look into providing "stylesheets" (or theme data) for your app, look at some basic routing (or navigation) methods, and fetch some data from the interwebs with HTTP requests.
Theming in Flutter
While building larger applications with custom UI components, you may want to create stylesheets. In Flutter, they are called Themes, and they can be used in a lot of places. For example, you can set a default app color, and then the selected texts, buttons, ripple animations, and more will follow this color. You can also set up text styles (like headings and more), and you’ll be able to access these styles across the app.
To do so, you should provide a theme property for your MaterialApp at the root level of the application. Here’s an example:
return MaterialApp(
  title: 'RisingStack Flutter Demo',
  theme: ThemeData(
    // Define the default brightness and colors.
    brightness: Brightness.light,
    primaryColor: Colors.green[300],
    accentColor: Colors.green,
    // Define the button theme.
    buttonTheme: ButtonThemeData(
      buttonColor: Colors.green,
      shape: CircleBorder(),
    ),
    // Define the default font family
    // (this won't work yet, as we don't have this font asset).
    fontFamily: 'Montserrat',
    // Define the default TextTheme. Use this to specify the default
    // text styling for headlines, titles, bodies of text, and more.
    textTheme: TextTheme(
      headline1: TextStyle(fontSize: 72.0, fontWeight: FontWeight.bold),
      headline6: TextStyle(fontSize: 36.0, fontStyle: FontStyle.italic),
      bodyText2: TextStyle(fontSize: 14.0, fontFamily: 'Muli'),
    ),
  ),
  home: Scaffold(...),
);
These colors will be used throughout our app, and accessing the text themes is just as simple. I added a RaisedButton on top of the app so that we can see the new ButtonThemeData being applied to it:
It’s ugly and all, but it’s ours! 🍋 Applying the text style won’t be automatic, though. As we previously discussed, Flutter can’t really read your mind, so you explicitly need to tag Text widgets as a headline1 or bodyText2, for example.
To do so, you’ll use the Theme.of(context) method. This will look up the widget tree for the nearest Theme providing widget (and note that you can create custom or local themes for subparts of your app with the Theme widget!) and return that theme. Let’s look at an example:
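As a sketch, tagging a Text with the theme's headline1 style could look like this:

```dart
Text(
  'Hello from a themed headline!',
  // Look up the nearest Theme in the tree and apply its headline1 style.
  style: Theme.of(context).textTheme.headline1,
),
```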
You can see that we are accessing the theme with the Theme.of(context) method, and then we are just accessing properties like it’s an object. This is all you need to know about theming a Flutter app as it really isn’t a complex topic!
Designing mobile navigation experiences
On the web, when managing different screens of the app, we used paths (e.g. fancysite.com/registration) and routing (e.g., react-router) to handle navigating back and forth the app. In a mobile app, it works a bit differently, so I’ll first introduce you to navigation on mobile, and then we’ll look into implementing it in Flutter.
Mobile navigation differs from the web in a lot of ways. Gestures and animations play a very heavy role in structuring out the hierarchy of the app for your user. For example, when a user navigates to a new screen, and it slides in from the right side of the screen, the user will expect to be able to move back with a slide from the left. Users also don’t expect flashy loadings and empty screens when navigating - and even though there are advancements on the web in this segment (e.g. PWAs), it’s by far not the default experience when using websites.
There are also different hierarchies when designing mobile apps. The three main groups are:
Hierarchical Navigation (e.g. the Settings app on iOS)
New screens slide in from left to right. The expected way to navigate back is with a back button in the upper left corner or by swiping from the left edge of the screen to the right.
Flat Navigation (e.g. the Apple Music app)
The default behavior for this hierarchy is a tab bar on the bottom.
Tabs should always preserve location (e.g. if you navigate to a subscreen inside tab one, switch to tab two, and then switch back to tab one, you'd expect to be on the subscreen, not on the root-level screen).
Swiping between tabs is optional. It isn't the default behavior, and it may conflict with other gestures on the screen itself - be cautious and think twice before implementing swipeable tab bars.
Custom, content-driven, or experimental navigation (e.g. games, books, and other content)
When making experimental navigation, always try to be sane with the navigation. The user should always be able to navigate back and undo things.
I created a handy cheat sheet for you that will remind you of the most important things when in doubt:
Also, all of these can be mixed together, and other screens like modals can be added to the stack. Always try to KISS and make sure that the user can always navigate back and undo things. Don’t try to reinvent the wheel with navigation (e.g., reverse the direction of opening up a new screen) as it will just confuse the user.
Also, always indicate where the user is in the hierarchy (e.g., with labeling buttons, app title bar, coloring the bottom bar icons, showing little dots, etc.). If you want to know more about designing mobile navigation experiences and implementing them in a way that feels natural to the user, check out Apple’s Human Interface Guideline’s related articles.
Navigation in Flutter
When routing on the web with React or React Native, you had to depend on third-party libraries to get the dirty work done for you (e.g. react-router). Luckily, Flutter has native navigation capabilities out of the box that cover the needs of most apps, and they are provided to you via the Navigator API.
The applications of this API and the possibilities to play around with navigation are endless. You can, for example, animate a widget between screens; build a bottom navigation bar or a hamburger menu; pass arguments; or send data back and forth. You can explore every navigation-related Flutter cookbook here. In this series, we’ll only look into initializing two screens, navigating between them, and sharing some widgets between them.
To get started with navigation, let’s create two widgets that we’ll use as screens and pass the first into a MaterialApp as the home property:
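Here's a minimal sketch of what those two screens could look like; the class names FirstScreen and SecondScreen are my own:

```dart
import 'package:flutter/material.dart';

// The screen passed in as the MaterialApp's home property.
class FirstScreen extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return Scaffold(
      body: Center(child: Text('hey! 👋')),
    );
  }
}

// The screen we'll navigate to later.
class SecondScreen extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return Scaffold(
      body: Center(child: Text('hi again! 🙋')),
    );
  }
}

// Somewhere at the root of the app:
// MaterialApp(home: FirstScreen()),
```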
This was easy as a breeze. If you run this app in a simulator, you’ll see “hey! 👋” on the center of the screen. Now, inside the MaterialApp, we can define our routes:
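The route table is just a map from route names to widget-builder functions. A sketch, assuming hypothetical FirstScreen and SecondScreen widgets and the '/hi' route name:

```dart
MaterialApp(
  home: FirstScreen(),
  routes: {
    // Each route name maps to a builder returning the screen widget.
    '/hi': (BuildContext context) => SecondScreen(),
  },
),
```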
And now, we can navigate the user to the next screen when the button is pressed. Notice that I replaced the Center with a Column with both its main and cross axes centered. This was required because I wanted to have two children underneath each other: a Text and a RaisedButton. Inside the RaisedButton's onPressed handler, we only have to push the route onto the stack and let Flutter handle the routing and animation:
Navigator.pushNamed(context, '/hi');
By default, we can navigate back to the previous screen by swiping from the left edge of the screen. This is the expected behavior, and we don't intend to change it, so we'll leave it as it is. If you want to add a button on the second screen that navigates back to the first screen, you can use the Navigator.pop() method.
Don't ever push the screen the user is currently on, nor the previous screen, onto the stack. Always use pop when navigating backward.
This will be just enough to cover your basic navigation needs. Don’t forget, if you want to check out more advanced navigation features such as animating widgets between screens or passing data back and forth, check out the related Flutter cookbooks.
Networking, HTTP requests
Now that you can build widgets, layouts, display lists, and you can navigate between screens with Flutter, there’s only one thing left: communicating with your backend API. One of the most popular BaaS providers for mobile and Flutter is Firebase by Google. It allows you to use real-time databases, push notifications, crash reporting, app analytics, and a lot more out of the box. You can find the Flutter Firebase packages on pub.dev or you can follow this step-by-step tutorial.
If you are a more experienced developer with a complex project and a custom backend in mind, or if you are just genuinely looking forward to using your own selection of backend APIs, Firebase just won't suit your needs. In that case, you'll want to make plain HTTP requests, and for that, we'll use the http package.
Just add it to your dependency list inside the pubspec.yaml, wait until flutter pub get finishes (VS Code automatically runs it for you if it detects changes in the pubspec.yaml), and then continue reading:
dependencies:
  flutter:
    sdk: flutter
  http: any
http is a Future-based library for making HTTP requests. To get started with it, just import it:
import 'package:http/http.dart' as http;
And then, you can start making requests with top-level methods like http.post or http.get. To help you experiment with making HTTP requests in Flutter, I have made a demo API that you can GET on. It will return some names and ages. You can access it here (https://demo-flutter-api.herokuapp.com/people).
Parsing JSON data in Flutter and Dart
After making your GET request on the API, you’ll be able to get data out of it by accessing properties like this:
void request() async {
  final response =
      await http.get("https://demo-flutter-api.herokuapp.com/people");
  print(response.body); // => [{"name":"Leo","age":17},{"name":"Isabella","age":30},{"name":"Michael","age":23},{"name":"Sarah","age":12}]
  print(json.decode(response.body)[0]["name"]); // => Leo
}
However, this solution should not be used in production. Not only does it lack automatic code completion and developer tooling, but it's also very error-prone and not really well documented. It's just straight-up crap coding. 💩
Instead, you should always create a Dart class with the desired data structure for your response object and then process the raw body into a native Dart object. Since we are receiving an array of objects, in Dart, we’ll create a typed List with a custom class. I’ll name the class Person, and it will have two properties: a name (with a type of String) and age (int). I’ll also want to define a .fromJson constructor on it so that we can set up our class to be able to construct itself from a raw JSON string.
First, you’ll want to import dart:convert to access native JSON-related methods like a JSON encoder and decoder:
import 'dart:convert';
Create our very basic class:
class Person {
  String name;
  int age;
}
Extend it with a simple constructor:
Person({this.name, this.age});
And add in the .fromJson method, tagged with the factory keyword. This keyword informs the compiler that this isn't a method on a class instance itself; instead, it returns a new instance of our class:
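One plausible shape for the finished class, with both factory constructors:

```dart
import 'dart:convert';

class Person {
  String name;
  int age;

  Person({this.name, this.age});

  // Builds a Person from an already-decoded Map.
  factory Person.fromMap(Map<String, dynamic> map) {
    return Person(
      name: map['name'],
      age: map['age'],
    );
  }

  // Parses a raw JSON string, then delegates to fromMap.
  factory Person.fromJson(String jsonString) {
    return Person.fromMap(json.decode(jsonString));
  }
}
```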
Notice that I created two separate methods: a fromMap and a fromJson. The fromMap method itself does the dirty work by deconstructing the received Map. The fromJson just parses our JSON string and passes it into the fromMap factory method.
Now, we should just map over our raw response, use the .fromMap factory method, and expect everything to go just fine:
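That mapping step could be sketched like this, assuming a Person class with a fromMap factory:

```dart
// Decode the raw body, build a Person from each map, collect into a List.
final List<Person> people = json
    .decode(response.body)
    .map<Person>((map) => Person.fromMap(map))
    .toList();
```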
Sidenote: I didn’t use the .fromJson method because we already parsed the body before mapping over it, hence it’s unneeded right now.
There is a lot to unwrap in these few lines! First, we define a typed list and decode the response.body. Then we map over the decoded data, throwing the return type <Person> into the map so that Dart knows to expect a Person as the result of the map function. Then, we convert the result to a List, as otherwise it would be a MappedListIterable.
Rendering the parsed JSON: FutureBuilder and ListView.builder
Now that we have our app up and running with our basic backend, it’s time to render our data. We already discussed the ListView.builder API, so we’ll just work with that.
But before we get into rendering the list itself, we want to handle some state changes: the response may be undefined at the moment of rendering (because it is still loading), and we may get an error as a response. There are several great approaches to wrap your head around handling these states, but we’ll use FutureBuilder now for the sake of practicing using new Flutter widgets.
FutureBuilder is a Flutter widget that takes a Future and a builder as a property. This builder will return the widget we want to render on the different states as the Future progresses.
Note that FutureBuilder handles state changes inside the widget itself, so you can still use it in a StatelessWidget! Since the http package is Future-based, we can just use the http.get method as the Future for our FutureBuilder:
And we should also pass a builder. This builder should be able to respond to three states: loading, done and error. At first, I’ll just throw in a centered CircularProgressIndicator() to see that our app renders something:
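A minimal sketch of that first step, wiring FutureBuilder straight to http.get (the URL here is a placeholder, not the course’s actual endpoint):

```dart
// Assumes `import 'package:flutter/material.dart';` and
// `import 'package:http/http.dart' as http;`.
FutureBuilder<http.Response>(
  future: http.get(Uri.parse('https://example.com/people')),
  builder: (context, response) {
    // For now, render a spinner regardless of the Future's state.
    return const Center(child: CircularProgressIndicator());
  },
)
```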
If you run this app, you’ll see a progress indicator spinning indefinitely in the center of the screen. We can check the state of the response through the response.hasData property:
Once response.hasData is true, we can be confident nothing stands between us and rendering the data, so inside the response.hasData block we’ll process the raw response with the parsing and mapping method discussed earlier, then return a ListView.builder:
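The complete builder could be sketched like this, covering all three states: error, loading, and done. (The URL is a placeholder, and the Person fields `name`/`age` are assumed from the rendered list.)

```dart
// Assumes `import 'dart:convert';`,
// `import 'package:flutter/material.dart';`,
// `import 'package:http/http.dart' as http;`, and the Person class.
FutureBuilder<http.Response>(
  future: http.get(Uri.parse('https://example.com/people')),
  builder: (context, response) {
    if (response.hasError) {
      // The request failed: surface the error instead of a list.
      return Center(child: Text('Error: ${response.error}'));
    }
    if (!response.hasData) {
      // Still loading: keep showing the spinner.
      return const Center(child: CircularProgressIndicator());
    }
    // Done: decode, map into Person instances, and render the list.
    final people = (jsonDecode(response.data!.body) as List)
        .map<Person>((raw) => Person.fromMap(raw as Map<String, dynamic>))
        .toList();
    return ListView.builder(
      itemCount: people.length,
      itemBuilder: (context, index) => ListTile(
        title: Text(people[index].name),
        trailing: Text('${people[index].age}'),
      ),
    );
  },
)
```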
And that’s it! 🎉 If you run this snippet right now, it will render four names with their corresponding ages next to them. Isn’t this amazing? It may have seemed like a lot of work for a simple list like this, but don’t forget that we created a full-blown class, parsed JSON and converted it into class instances, and even handled loading and error states.
Summing it all up
Congratulations on making it this far into the course! You have learned a lot and come a long way since we started in the previous episode.
You went from zero to hero both with Dart (types, control flow statements, data structures, OOP, and asynchrony) and Flutter (CLI, widgets, alignment, lists, themes, navigation and networking).
Key Target Platforms Update: I wanted to share some interesting data following a recent presentation on modern Windows development.
75% of Windows desktops are running Windows 10!
Windows 10 is now used by 3 out of every 4 Windows desktop machines! That market share grew by around 10% in 2020, having originally overtaken Windows 7 back at the end of 2017.
Windows 7 has dropped to around 18% and is still falling, partly as businesses continue to move to Windows 10 after Windows 7 reached end of life in January. Windows 8.1 is stable at around 4%.
What does this mean for me as a developer?
You need to make sure your applications are ready for Windows 10 more than ever. Windows 10 made fundamental changes to the user interface layer in response to a range of hardware innovations and usage patterns. This includes per-monitor support for different resolutions and DPIs, and the improvements around HighDPI support.
HighDPI support is no longer an optional item; without it, your application can become unusable on certain screens, degrading the end-user experience.
That said, there is still a reason to maintain backward compatibility with older versions of Windows. (Something the VCL helps support with its implementation of the new Windows 10 controls: yes, you can run them on Windows 7 and Windows 8 if you use the VCL.)
To learn more about Windows 10 and some of the new Windows 10 controls and features in RAD Studio for Delphi and C++Builder, the blog post “5 Unique Features for Windows 10” is a good summary.
As for device types, it is mobile (50.33%) versus desktop (47.04%), with tablets accounting for only 2.63% of market share.
This means Android (and iOS as well) is a key platform and a technical asset to target when expanding the technical capabilities of your desktop applications. The barrier to entry is low, since adoption is high, which makes it an ideal target for enhancing your product offering and maximizing development return.
With more mobile devices in use than desktop computers, mobile cannot be ignored when it comes to product innovation. Mobile devices offer the developer a different set of technical capabilities, e.g. camera, accelerometer, compass, etc., which, combined with desktop solutions, provide innovative ways to capture data.
Since the core system libraries in Delphi are cross-platform, you can fast-track your mobile development by using a single code base. Large parts of the business logic can be moved from Windows to iOS and Android quickly.
It is also worth looking at the Enterprise edition of Delphi to get access to InterBase ToGo for mobile as a royalty-free runtime database. Full on-disk database encryption provides the highest level of data security, normally reserved for enterprise servers, yet within a small-footprint, highly distributable database.
If you want to use a mobile device alongside a local application (and do not need the data to go to a central location for processing first), then the unique AppTethering approach is certainly worth a look. AppTethering avoids the need to send data to a central server, making it faster (since the data stays local). If that sounds interesting, be sure to watch this webinar replay.
Alternatively, RAD Server is a great way to take existing business logic and make it accessible as a remote API. Click to see more RAD Server blog posts.
Region-Specific Trends
If you want to dig deeper into region-specific trends, I would suggest visiting StatCounter and using the interactive charts, powered by Fusion Charts (which also recently became a member of the Idera Group).
Embarcadero has updated the Docker configurations for Linux with PAServer and with RAD Server (with and without InterBase) so that they include the latest version of the installers. The new Docker scripts have been published in the public GitHub project at:
In each repository there is a specific 10.4.1 section. The ready-to-use GitHub scripts have also been configured and are available on DockerHub in the following locations:
Note that while the focus is on RAD Server, the base image can help any Linux developer by installing PAServer in a container, with full customization of the PAServer port and login credentials. For documentation, see this white paper, which covers the original version of the Docker scripts and is available in the new Embarcadero customer download portal:
With nearly 3,000 lessons created by music teachers for everyone from beginners to professional musicians playing any instrument, EarMaster is a comprehensive consumer-grade app with extraordinary functionality that draws on a variety of different technologies. Despite the app’s technologically advanced backend, the EarMaster team worked hard to make it as simple and intuitive to use as possible. EarMaster features audio recording and playback, Musical Instrument Digital Interface (MIDI) input and output, an instrument sound sampler, and many other technologies, all built with the native iOS frameworks (AudioUnit, CoreMidi, etc.). It supports Windows and macOS as well as iOS. It draws real-time pitch curves on a musical staff, based on pitch detection from microphone recordings, teaching musicians of all levels to recognize, transcribe, and sing melodies, scales, chords, intervals, chord progressions, and rhythms.
“We choose Delphi because it allows us to create a true native iOS app, with a GUI using native components, and still share 97 percent of the source code with other platforms,”
Hans Lavdal Jakobsen, managing director and lead developer of EarMaster ApS
https://www.youtube.com/watch?v=uaJQUcgGB_M
“If you develop a multi-platform app, Delphi is simply the fastest way to go.”
Hans Lavdal Jakobsen, managing director and lead developer of EarMaster ApS
Embarcadero Conference 2020 brings together more than 700 developers live online for a day full of content and solidarity
This Tuesday (October 20) saw another edition of the Embarcadero Conference, the main developer event of the year promoted by Embarcadero. This year, due to the pandemic, the event was 100% online, and all proceeds from ticket sales and sponsorships will be converted into hygiene kits and basic food baskets to be distributed to homeless people and underprivileged communities in São Paulo, in a joint effort with support groups such as UMA – Um Momento de Amor (@umanasruas), which already does beautiful, ongoing work.
It was challenging to bring the interactive, immersive dynamic of an in-person event into the digital medium. But all the effort was worth it: the event was a great success! We had massive participation from an incredible community of developers who are passionate about Delphi, about Embarcadero tools, and, above all, about technology. Around 900 people registered and more than 700 were online following the talks live. Beyond the official numbers, our reach multiplied thanks to the online format.
With this great result, we will be able to bring food to around 700 households affected by the pandemic and distribute around a thousand hygiene kits to people living on the streets. The hygiene kit distribution is already scheduled for November 15 and 29. Anyone who wants to take part in the distribution and support not only this initiative but others as well can simply get in touch with @umanasruas and will be warmly welcomed!
Next week, on October 27, we will raffle off kits and licenses live for everyone who filled out the general evaluation.
Exclusive platform and a new track
To support the event, the company used an exclusive platform that made it possible to organize parallel tracks and simultaneous talks. In all, there were five dedicated knowledge tracks and 45 talks over a full day of hands-on content and interaction. Attendees could interact through the event’s main chat and the rooms where the talks took place, as well as speak with sponsors at their virtual booths.
On the main stage, the keynote was led by Jim McKeeth, Chief Developer Advocate & Engineer, Marco Cantù, Delphi Product Manager, and David Millington, C++ Product Manager, who discussed the roadmap and plans for the future and answered all questions live! Engagement was enormous; many people sent in questions, showing their interest in the future of Embarcadero’s solutions.
The MVPs put on a show! There was even a demonstration of capturing vehicle data using sensors and a Delphi application, presented by speakers Sileide Campos and Samuel David (Muka) straight from their cars. While Muka drove, the data appeared in real-time charts on another screen monitored by Sileide.
Another highlight: for the first time we had an academic track, aimed exclusively at students, university attendees, teachers, and beginners. The track discussed topics such as the future of the IT professional, practical development tips, and how to build a successful career, among other subjects relevant to this audience. Positive feedback has already started coming in.
It was a completely new experience for everyone. It certainly does not replace face-to-face contact, but it brought many lessons and made it possible to bring human warmth to everyone who connected with this cause!
These days, low-code development is in vogue. Various research groups, such as Gartner, put the low-code application development platform market at roughly $10 billion in 2019 and project a CAGR of more than 20% from 2020 to 2027. By contrast, the developer tools market has remained largely flat, with growth estimated at under 5% at best, driven largely by the wide proliferation of open source.
Why does this matter for Delphi developers? Let me start with a quick overview of low code, since many developers are unfamiliar with the concept. Low code is a software development approach that requires little to no coding to build applications and processes. A low-code development platform uses visual interfaces with simple logic and drag-and-drop features instead of extensive coding languages. Low code is hardly new. Twenty years ago, 4G scripting languages aimed to simplify development by abstracting lower-level languages, such as C++, into more streamlined scripting languages. Some of these were purpose-built (e.g. SAS) while others were more generic (LANSA, UNIFACE, etc.). Many of the latter have since evolved into low-code platforms.
Some of the most popular and relatively new examples of low-code platforms these days are Outsystems and Mendix. They provide visual IDEs and produce web applications that can be deployed on mobile. They have slick UIs, for sure, but what is important is that underneath they consist of Java/C# applications with JavaScript front ends. Indeed, almost always to implement complex apps, you have to go to the source code and program in these respective languages.
Frequently, porting these parts of the app back into the IDE is not so easy, or at least the low-code aspect is lost. For example, you can “extend” Mendix with Java.
What this means is that to build a complex app, you may suddenly need a Java developer, a JavaScript developer, and a visual OutSystems developer. You can imagine the impact on development speed and especially on application maintenance.
While many low-code platforms promise no-code approaches, this is frequently impractical for robust apps that are scalable and performant. It is no coincidence that all the low-code platforms rely on armies of consultants and professional services.
All of this surely reminds you of RAD Studio. What is special about RAD Studio is that you can move seamlessly from visual development to coding to maximize performance. The resulting app is highly performant and ultimately scalable. Of course, if you want to build a web client, some approaches might send you to JavaScript, but that is not so different from a “fancy” low-code platform.
The main benefit of low code is that you need fewer developers and people can learn the system quickly. Well, that is Delphi’s secret. You need only a few developers, and learning Delphi is probably as easy as learning any of these low-code platforms. The real Delphi experts know Delphi; the real experts on these other platforms have to know so much more. The Delphi community, which may not be as large as that of C# or C++, is huge compared to any of these low-code approaches. Finally, and most importantly, RAD Studio costs a fraction of what any other low-code solution does.
So the next time someone asks why you love RAD Studio and Delphi, just tell them: it’s like having a low-code solution, but much better!
These days, low-code development is in vogue. Various research groups, such as Gartner, put the low-code application development platform market at roughly $10 billion in 2019 and project a CAGR above 20% from 2020 to 2027. By contrast, the developer tools market has stayed largely flat, with growth estimated at under 5% at best, driven in large part by the broad proliferation of open source.
Why does this matter to Delphi developers? Let me start with a quick overview of low code, since many developers are unfamiliar with the concept. Low code is a software development approach that requires little to no coding to build applications and processes. A low-code development platform uses visual interfaces with simple logic and drag-and-drop features instead of extensive coding languages. Low code is hardly new. Twenty years ago, 4G scripting languages aimed to simplify development by abstracting low-level languages such as C++ into more streamlined scripting languages. Some of these were purpose-built (e.g., SAS) and others were more generic (LANSA, UNIFACE, etc.). Many of the latter have now evolved into low-code platforms.
Some of the most popular and relatively new examples of low-code platforms these days are OutSystems and Mendix. They provide visual IDEs and produce web applications that can be deployed to mobile devices. They have slick user interfaces, to be sure, but what matters is that under the hood they consist of Java/C# applications with JavaScript front ends. Indeed, to implement complex applications you almost always have to drop down to the source code and program in those respective languages.
Porting those parts of the application back into the IDE is often not so easy, or at least the low-code aspect is lost. For example, you can “extend” Mendix with Java.
What this means is that, to build a complex application, you may suddenly need a Java developer, a JavaScript developer and, yes, a visual OutSystems developer. You can imagine the impact on development speed and especially on application maintenance.
While many low-code vendors promise no-code approaches, this is frequently impractical for robust applications that are scalable and performant. It is no coincidence that all low-code platforms rely on armies of consultants and professional services.
Now, all of this surely reminds you of RAD Studio. What is great about RAD Studio is that you can move seamlessly from visual development to coding in order to maximize performance. The resulting application is high-performing and ultimately scalable. Of course, if you want to build a web client, some approaches may send you into JavaScript, but that is not so different from a “fancy” low-code platform.
The main benefit of low code is that you need fewer developers and people can learn the system quickly. Well, that is Delphi’s secret. You need few developers, and learning Delphi is probably as easy as learning any of these low-code platforms. True Delphi experts know Delphi; true experts in these other platforms have to know much more. The Delphi community, while perhaps not as big as the C# or C++ ones, is vast compared to any of these low-code approaches. Finally, and most importantly, RAD Studio costs a fraction of any other low-code solution.
So the next time someone asks you to explain why you love RAD Studio and Delphi, just tell them: it’s like having a low-code solution, but much better!
Ruth Malan re-stated Conway’s law like this: “If the architecture of the system and the architecture of the organization are at odds, the architecture of the organization wins”. As a technical leader, that really caught my attention. The sphere of things I should be able to influence in order to do my job well just grew!
I read that quote in the book “Team Topologies” by Matthew Skelton and Manuel Pais. It is a really interesting book, and it introduces several ideas that were new to me at least; they deserve a wide audience within the software industry. In this article I’ll give you some highlights, and hopefully tempt you to read the book for yourself.
This first idea perhaps isn’t so novel, but it is the basis for the rest, so worth mentioning first. You should use the team as the fundamental building block for your organization. Rather than thinking about hierarchies or departments that get re-organized from time to time, think about building close-knit, long-lived teams. Adopt a team structure that is aligned to the business and adapts and evolves over time.
Conway’s law
When designing a team structure for your organization you should take Conway’s law into account: your team structure and your software architecture are strongly linked, so you should deliberately design team responsibilities and communication pathways. Skelton and Pais suggest there are four fundamental team topologies, i.e. kinds of team. The main one is ‘business stream-aligned’; the other three have supporting roles. The way they describe a stream-aligned team seems familiar to me. It’s like a DevOps team or a feature team, although I think the name is better. It’s a team that’s aligned to a single, valuable stream of work. They do everything needed to deliver valuable software to customers, gather feedback from how it’s used, and improve the way they work.
The other three kinds of team exist to reduce the burden on those stream-aligned teams. Skelton and Pais introduce the idea that good organization design requires restricting the amount of ‘cognitive load’ each team is expected to handle. Software systems can become large and complex and can require an overwhelming amount of detailed knowledge to work on them. This really resonated with me. I’ve worked on teams where the amount of software we had to look after was just too much to cope with. Important work was delayed or missed because we were overburdened.
The other three types of team are:
Platform
Complicated Subsystem
Enabling
There is a fuller description of each in the book of course, but to summarize – platform teams provide a self-service tool, API or software service. Enabling teams support other teams with specialized knowledge in particular techniques and business domains. Complicated subsystem teams look after a particular component of the software that needs specialized knowledge, like an optimizer or a machine learning algorithm.
Following from the idea of Conway’s law in particular, is the idea that you should have only three ‘interaction modes’ between teams. Restrict the communication pathways to get the architecture you want, and to avoid unnecessary team cognitive load. Skelton and Pais suggest teams should work together in one of three ways:
Collaborate
Facilitate
Provide X-as-a-Service
Collaboration means two teams work closely together, have a common goal, and only need to work this closely for a limited time. (Otherwise the teams would merge!) Facilitation is about one team clearing impediments that are holding back the other team. X-as-a-service is a much looser collaboration, where one team provides something like a software library or API to another team. The way teams interact with one another will evolve over time, and consequently your organization will also evolve.
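As an illustration only (none of this code appears in the book), the four team kinds and the three interaction modes can be sketched as a small model that maps each supporting team kind to the mode it most often offers; the mapping here is my simplification of Skelton and Pais’s guidance:

```python
from enum import Enum

class TeamKind(Enum):
    STREAM_ALIGNED = "stream-aligned"
    PLATFORM = "platform"
    ENABLING = "enabling"
    COMPLICATED_SUBSYSTEM = "complicated subsystem"

class InteractionMode(Enum):
    COLLABORATE = "collaborate"
    FACILITATE = "facilitate"
    X_AS_A_SERVICE = "x-as-a-service"

def typical_mode(kind: TeamKind) -> InteractionMode:
    """The interaction mode a team of this kind most often offers to
    stream-aligned teams (a simplification, for illustration only)."""
    if kind is TeamKind.ENABLING:
        # Enabling teams clear impediments and transfer skills.
        return InteractionMode.FACILITATE
    if kind in (TeamKind.PLATFORM, TeamKind.COMPLICATED_SUBSYSTEM):
        # Platform and complicated-subsystem teams expose a service/API.
        return InteractionMode.X_AS_A_SERVICE
    # Stream-aligned teams collaborate when working closely with another team.
    return InteractionMode.COLLABORATE

print(typical_mode(TeamKind.ENABLING).value)
print(typical_mode(TeamKind.PLATFORM).value)
```

In practice an organization design would also record which pairs of teams interact and revisit those pairings over time, since the book stresses that interaction modes are expected to change.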
I thought it was a good sign that I could imagine how my own work would fit into an organization designed this way. I think I would fit well into an Enabling team that would support stream-aligned teams through facilitation. We would work with several teams over time. My particular role is to facilitate a team to clear impediments around technical debt and code quality, and learn skills like Test-Driven Development.
Team Topologies really does make organizational design feel like you’re doing architecture. Skelton and Pais have pretty diagrams throughout their book with colours and patterns and lines and boxes describing various organizational designs. It’s all very attractive to a software developer like me. I think the intended audience is managers though. People who are designing organizations today. I really hope some of them read this book and are inspired to involve technical people in important organizational decisions.
During part 1 of our webinar last week, there were some questions about installing the components and getting started. I made a short video and wanted to provide some details here.
SynEdit is an optional library that provides syntax highlighting and proper indentation behaviors if you want to allow users to edit Python code in your application. If you just want to interact with Python and Python libraries, then you don’t need SynEdit. It is an open-source, VCL-only component set available via GetIt or on GitHub. Installing it via GetIt is the easiest option.
Python4Delphi is the library that provides the integration between Python and Delphi. It is effectively a bidirectional bridge that lets Delphi execute Python code and call Python libraries, and lets Python call modules written in Delphi and otherwise interact with Delphi code, objects, interfaces, records, etc. For example, you could wrap the VCL from Python and use it to build an application GUI. There is a wiki page covering installation.
Python provides the libraries and interpreters. You need the right version for the platform you are targeting (Windows, macOS, Linux, etc.), and you must make sure the bitness (32- vs. 64-bit) matches your program. On Windows you can install 32- and 64-bit versions side by side. Python provides an embeddable version, which is a minimal installation that you can easily include with your program. For more information on using specific Python versions, see the P4D wiki.
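Since the bitness of the interpreter must match the host application, it can help to check it programmatically. A minimal sketch (the `matches` helper and the idea of comparing against a host value are mine, not part of Python4Delphi): a pointer is 4 bytes on a 32-bit build and 8 bytes on a 64-bit build, which `struct.calcsize` exposes.

```python
import struct
import sys

def interpreter_bitness() -> int:
    """Return 32 or 64 depending on the running Python interpreter.
    The 'P' format character is the size of a pointer in bytes."""
    return struct.calcsize("P") * 8

def matches(host_bitness: int) -> bool:
    """True if this interpreter's bitness matches the host application's."""
    return interpreter_bitness() == host_bitness

print(f"Python {sys.version_info.major}.{sys.version_info.minor}, "
      f"{interpreter_bitness()}-bit")
```

A 64-bit Delphi executable would pass `matches(64)`; a mismatch is a common cause of DLL load failures when embedding Python.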
A brand new lesson dedicated solely to reviewing has finally been added to all levels of our Essential Korean Courses!
The main ideas of all 30 lessons are squeezed into one compact lesson to make your run-of-the-mill review sessions more efficient and enjoyable.
One third of it is available to our Basic members, while the Premium users can benefit from it entirely.
Test yourself quickly and easily under the Dialogue tab! Listen to the audio conversation that covers everything you’ve learned in your particular level, and reinforce your understanding with the provided Korean script and its English translation. Study wherever and whenever, using the complimentary MP3 and PDF files.
Available for both Basic & Premium Users.
Listen to the sample audio track below!
Save time by reviewing the key vocabulary words in one go! Check your memory first by reading the vocabulary list, and then listen to the provided audio track – led by Hyunwoo & Kyeong-eun – to mark your answers. You’ll also be given the lists in both Korean and English.
Available for Premium Users only.
Listen to the sample audio track below!
Review major grammar points by clicking on the Grammar tab! Hyunwoo & Kyeong-eun will go over the condensed summary of each level in the given audio tracks, where you’ll have to answer questions to test your understanding.
With every version of InterBase we introduce new features that make the database experience easier for our users. When InterBase 2020 was released, we added several improvements and a great new feature called Tablespaces.
What is an InterBase Tablespace?
A tablespace is a type of storage location intended for database objects. It lets you group data files in a particular storage space (location) that you choose. Tablespaces allow better database performance and optimization of server hardware by giving developers and administrators more control over the disk layout.
A few things you should know about tablespaces
Your database’s page size is the same as your tablespace’s page size.
The maximum size of IB databases can grow from 32 TB to 8,160 TB using the primary and 254 secondary tablespaces.
You can use tablespaces to tune the performance of your runtime database.
Your main database file(s) is/are always the primary tablespace.
Setting up your tablespaces
IBConsole
Create your tablespace – give it a name and a file location
Assign tables to the tablespace – open the table or index and change its tablespace location
Verify that the tables and indexes are listed in your tablespace’s properties
Check out the video on setting up a tablespace for a table and an index in IBConsole
Command Line & ISQL
1. CREATE TABLESPACE <tablespace_name> FILE <'Path/To/File/Location'>
Note: you can give the tablespace a file extension of your choice, or none at all.
2. Assign tables and indexes to your tablespace:
ALTER TABLE <table_name> [ALTER TABLESPACE {<tablespace_name>}]
ALTER INDEX <index_name> <column list> [ALTER TABLESPACE {<tablespace_name>}]
3. Double-click the tablespace you created in IBConsole and verify that the tables and indexes you added are present, or use ISQL:
SHOW TABLES IN TABLESPACE [<tablespace_name>]
SHOW INDEXES IN TABLESPACE [<tablespace_name>]
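When scripting a migration, statements like the ones above are often assembled programmatically. A minimal Python sketch that only builds the statement strings shown in this article (the table and tablespace names below are hypothetical, and actually executing the statements requires a connection to an InterBase server, which this sketch does not attempt):

```python
def create_tablespace(name: str, file_path: str) -> str:
    """Build a CREATE TABLESPACE statement as shown above."""
    return f"CREATE TABLESPACE {name} FILE '{file_path}'"

def alter_table_tablespace(table: str, tablespace: str) -> str:
    """Build an ALTER TABLE ... ALTER TABLESPACE statement."""
    return f"ALTER TABLE {table} ALTER TABLESPACE {tablespace}"

def show_tables(tablespace: str) -> str:
    """Build the ISQL command to list tables in a tablespace."""
    return f"SHOW TABLES IN TABLESPACE {tablespace}"

# Hypothetical names, purely for illustration:
print(create_tablespace("SALES_DATA", "C:/ib_data/sales.its"))
print(alter_table_tablespace("ORDERS", "SALES_DATA"))
print(show_tables("SALES_DATA"))
```

In a real script you would pass each string to your database driver’s execute call, one statement at a time.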
Want to learn more about tablespaces in InterBase 2020?
We recently started a new experiment, opening up some internal projects for MVPs to work on. Unlike some of our open-source initiatives, such as Bold, these remain owned by Embarcadero and are a core part of the product. This is something MVPs have been asking to get involved in for a while, so it is very exciting to finally say that it is in full swing and a success.
The first experimental project was the XML Mapper, and as of a few days ago the first release of the updated XML Mapper is available via GetIt. The main goals of this first release were updating the user interface, fixing existing bugs, and improving compliance with XML schema standards. The team has some big plans for the future of XML Mapper, though, so you will see many more updates.
If you take a look at the old XML Mapper before installing the new one, you will see how far it has come.
Original vs. updated user interface
Just like RAD Studio, the XML Mapper supports both the light and dark themes, and it automatically synchronizes theme changes with RAD Studio (and ultimately even the operating system).
Light vs. dark user interface
If, after installing the new XML Mapper, you want to access the old one for nostalgia’s sake, or need to uninstall it, the old XML Mapper is backed up so it can be restored. You will find it in the C:\Users\Jim\Documents\Embarcadero\Studio\21.0\CatalogRepository\XMLMapper-27 folder, renamed to XmlMapper_old.exe.
Lastly, I want to give a shout-out to all the MVPs involved in testing and updating the XML Mapper project, especially the core team, who stuck with this project, worked it through to a successful release, and are working hard on future features and updates.
Type [ALT] + TEAM on the About screen
I asked each of the core team members to share some thoughts about their involvement.
Roger Swann
I was wary of joining the XML MVP project: being a C++ guy and working on this Delphi-based system made me cautious. As it turned out, I feel my code-review role made a small but useful contribution, and it is very interesting to work with a group of experienced programmers from varied backgrounds (both professional and geographic) who are all “team players”.
Roger Swann, Embarcadero C++ MVP
Glenn Dufke
… When Embarcadero announced that MVPs could contribute to certain projects that are part of the IDE installation, I joined right away to help with XML Mapper and make it even better. The collaboration has also produced a great team of talented developers, letting us draw on experience from around the world, and thanks to the team XML Mapper has made enormous progress. I will definitely take part in more projects in the future as they become available.
Glenn Dufke, Embarcadero MVP
Olaf Monien
When Embarcadero announced it was opening certain projects to MVP collaboration, I knew right away this would be a great opportunity to actively fix and develop code and become part of a broad team. The XML Mapper is also somewhat underrated in terms of its features, so improving its reputation was tempting. While working on the project, where I was rewarded with the role of team lead, I had the chance to meet some other very talented MVPs I had barely been in contact with before. As a team we all learned some valuable techniques, which was (and still is) a great experience!
Olaf Monien, MVP Regional Coordinator and XML Mapper team lead
Jason Chapman
I just wanted to help and be part of Embarcadero’s first project working closely with a group of MVPs on a closed-source part of the IDE. I can see it is an amazing direction of travel, i.e. getting the bright and enthusiastic community to contribute to the actual product, which is a big step beyond testing and reporting issues. I attended most of the weekly stand-up meetings, reviewed code, and found an obscure bug when switching between monitors with different DPIs. The team’s ambition for the XML Mapper is great, and I hope we can keep working on it and on other add-ons/parts of the product. I think we have started to create a template for future collaborations, while having a full backlog of tasks to keep working on.
Jason Chapman, Embarcadero MVP Regional Coordinator
Miguel Angel Moreno
A few years ago I used XML Mapper as a key tool in some XML-based projects. The technology looked impressive, but I felt I was not using the full power of these tools. Now that the XML Mapper code has been made available to MVPs, I am very excited to discover its full features and capabilities and to help Delphi and C++Builder developers understand and explore the power of this tool. In these times, when electronic invoicing and accounting are slowly but steadily taking over the world, there has been no better moment to take advantage of the features that XML Mapper has built in…
Miguel Angel Moreno, Embarcadero MVP
Ricardo Boaro
The opportunity to work on the XML Mapper project is a great honor. I think being part of such a project is a big incentive for an MVP. We share ideas, we learn from each other, and the winner is the developer community, which gets a product with improvements and new features. Thanks to everyone on the team for the partnership; today I can say I have new friends. Thanks to Embarcadero for this experience.
Ricardo Boaro, Embarcadero MVP
So update your XML Mapper, and keep an eye out for more XML Mapper updates and other MVP Project releases!
With every release of InterBase, we introduce new features that make the database experience easier for our users. When InterBase 2020 was released, we added several improvements and a great new feature called Tablespaces.
What is an InterBase tablespace?
A tablespace is a type of storage location intended for database objects. It lets you group data files in a specific storage space (location) of your choosing. Tablespaces allow for better database performance and server-hardware optimization by giving developers and administrators more control over the disk layout.
A few things you should know about tablespaces
Your database's page size is the same as your tablespace's page size.
The maximum size of IB databases can grow from 32 TB to 8,160 TB using the primary and 254 secondary tablespaces.
You can use tablespaces to optimize your database's runtime performance.
Your main database file(s) are always the primary tablespace.
Setting up your tablespaces
IBConsole
Create your tablespace: give it a name and a file location.
Assign tables to the tablespace: open the table or index and change its tablespace location.
Verify that the tables and indexes are listed in your tablespace's properties.
Check out the video on setting up a tablespace for a table and index in IBConsole.
Command line and ISQL
1. CREATE TABLESPACE <tablespace name> FILE <'Path/To/File/Location'>
Note: you can give the tablespace a file extension of your choice, or none at all.
2. Assign tables and indexes to your tablespace:
ALTER TABLE <table_name> [ALTER TABLESPACE {<tablespace_name>}]
ALTER INDEX <index_name> <column list> [ALTER TABLESPACE {<tablespace_name>}]
3. Double-click the tablespace you created in IBConsole and make sure the tables and indexes you added are there, or use ISQL:
SHOW TABLES IN TABLESPACE [<tablespace_name>]
SHOW INDEXES IN TABLESPACE [<tablespace_name>]
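Putting the steps above together, a minimal ISQL session might look like the sketch below. The tablespace name, file path, table, and index are all hypothetical, and the exact syntax should be checked against the InterBase 2020 documentation:

```sql
-- Create a tablespace on a fast disk (name and path are illustrative)
CREATE TABLESPACE SALES_DATA FILE 'D:\ib_data\sales_data.its';

-- Move a table and one of its indexes into the new tablespace
ALTER TABLE ORDERS ALTER TABLESPACE SALES_DATA;
ALTER INDEX IDX_ORDERS_DATE ALTER TABLESPACE SALES_DATA;

-- Verify the assignment
SHOW TABLES IN TABLESPACE SALES_DATA;
SHOW INDEXES IN TABLESPACE SALES_DATA;
```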
Want to learn more about tablespaces in InterBase 2020?
We recently launched a new experiment: opening up some internal projects for MVPs to work on. Unlike some of our open-source initiatives such as Bold, these are still owned by Embarcadero and are a core part of the product. This is something MVPs have been asking to get involved in for a while, so it is very exciting to finally say that it is in full swing and a success.
The first experimental project was XML Mapper, and as of a few days ago, the first release of the updated XML Mapper is available through GetIt. The main goals of this first release included updating the UI, fixing existing bugs, and improving compliance with XML schema standards, but the team has big plans for the future of XML Mapper, so you will see many more updates.
If you take a look at the old XML Mapper before installing the new one, you will see how far it has come.
Original vs. updated UI
Like RAD Studio, XML Mapper supports both the light and the dark theme, and it synchronizes theme changes with RAD Studio (and ultimately even with the OS) automatically.
Light vs. dark UI
If, after installing the new XML Mapper, you want to access the old XML Mapper for nostalgic reasons, or you need to uninstall, the old XML Mapper is backed up and available for restoration. You can find it in the C:\Users\Jim\Documents\Embarcadero\Studio\21.0\CatalogRepository\XMLMapper-27 folder, renamed to XmlMapper_old.exe.
Finally, I want to thank all the MVPs who were involved in testing and updating the XML Mapper project, especially the core team who stuck with this project and saw it through to a successful release, and who are working hard on future features and updates.
Type [ALT] + TEAM on the About screen.
I asked each of the core team members to share some thoughts on their involvement.
Roger Swann
I was wary of joining the XML MVP project: being a C++ guy and working on this Delphi-based system made me cautious. As it turned out, I feel my code-review role has made a small but useful contribution, and it is very interesting to work with a group of skilled, knowledgeable programmers from different backgrounds (both professional and geographic), all of whom are "team players".
Roger Swann, Embarcadero C++ MVP
Glenn Dufke
XML Mapper is an incredibly valuable and useful tool for developers, and it needed several updates to fix bugs and support newer XML schema features. When Embarcadero announced that MVPs could contribute to specific projects that are part of the IDE installation, I instantly joined in to help with XML Mapper and make it even better. The collaboration experience has also fostered a great team of talented developers where we can draw on each other's experience on a global scale, and it has resulted in XML Mapper advancing enormously, all thanks to the team. I will definitely join more projects in the future once they become available.
Glenn Dufke, Embarcadero MVP
Olaf Monien
When Embarcadero announced that they would open up certain projects for MVP collaboration, I knew right away that it would be a great opportunity to actively fix and develop code and be part of a large team. XML Mapper is also somewhat underrated in terms of what it can do, so it was tempting to help improve its reputation. While working on the project, where I was rewarded with the team lead role, I had the chance to meet other very talented MVPs with whom I previously had little or no contact. All of us, as a team, learned several valuable techniques, which was (and still is) a great experience!
Olaf Monien, MVP Regional Coordinator and XML Mapper Team Lead
Jason Chapman
I just wanted to help and be part of the first Embarcadero project working closely with a group of MVPs on a closed-source part of the IDE. I can see it is an amazing direction of travel, namely enabling the bright and enthusiastic community to contribute to the actual product, which is a great step up from testing and reporting issues. I attended most of the weekly stand-ups, looked at some code, and found an obscure bug when switching between monitors with different DPIs. The team's ambition for XML Mapper is great, and I hope we can keep working on it and on other add-ons and parts of the product. I think we have started to build a template for future collaborations, while having a full board of tasks to keep working on.
Jason Chapman, Embarcadero MVP Regional Coordinator
Again and again we hear from customers and prospects that they cannot complete a download from our website. Symptoms include "Sitzung abgelaufen" / "Session expired", or being unable to log in with an existing account at all.
In most cases (ninety-nine percent of the time), this is caused by an ad blocker (or a corresponding DNS resolver interfering in between, such as PiHole) and by cookies being allowed or blocked.
What is the problem?
You try to download a trial or a Community Edition. On the corresponding web pages (Trial Delphi, Trial C++Builder, or Trial RAD Studio, or the Community Edition for Delphi or C++Builder), there are two ways to register for the version:
Either you create a new user account right there (also called an EDN or DN account / "[Embarcadero] Developer Account"),
or you log in with an already existing account.
…then you can log in.
The download problem can occur with both variants (creating a new account or logging in). After filling out the form, you get the terse message "Sitzung abgelaufen" or "Session expired".
What is the cause?
We rely on some background information that is set via cookies. If not all of this information can be stored, you get the expired-session message, which is somewhat misleading.
What is the solution?
There are three things to keep in mind:
Allow cookies on the website.
Turn off your ad blocker (temporarily).
How to do this varies considerably depending on the ad blocker you use.
Turn off any manipulating DNS server, e.g. PiHole (temporarily).
This, too, can look different depending on the DNS resolver in use.
For many people, the easiest workaround (on Windows) is simply to use Internet Explorer (temporarily), since it usually has no ad blocker or other "security software" installed.
Special keynote with Marco Cantù, Jim McKeeth, and David Millington at Embarcadero Conference 2020
We are in the final stretch of registrations for Embarcadero Conference 2020 (learn all about it in my last post)!
And here is one more reason why you cannot miss this year's event.
The keynote will be delivered by Marco Cantù (Delphi Product Manager), David Millington (C++ Product Manager), and Jim McKeeth (Chief Developer Advocate & Engineer), discussing the product roadmap, initiatives, and projects for the whole community!
Just to reinforce: your admission is the equivalent of a basic food basket, which will be donated to those who need it most right now. Come enrich your knowledge and your team's, and do some good at the same time!
Our renewed focus on quality assurance and bug fixes for C++Builder has never been as evident as in 10.4.1. We thank you for your patience, which we do not take for granted. We have never been more motivated to keep building on C++Builder's solid foundation, and we will continue this push in later releases throughout the year.
Some highlights in this release:
The Win64 debugger, which is based on LLDB, has received several important quality improvements and new features. For example, it now has greatly improved performance for applications with hundreds of threads; improvements in exception handling, especially OS exceptions; handling of memory changes in complex variables (e.g. if the pointer of a pointer to an element changes, this is reflected in the IDE); and many other fixes across a variety of areas, as well as a new formatter (visualizer) for unique_ptr.
The Win64 linker (ilink64) includes a number of memory-management improvements to help customers who run into memory problems, especially with debug builds.
Important quality improvements across the entire toolchain, ranging from Midas to exception handling to RTTI to stability.
Our goal is to make C++Builder a stable and efficient IDE again. Once we are happy with the foundation, we will turn to bigger and better things. We hope to update code completion and to completely replace the Win64 linker over the course of the next year, which will enable much better productivity in the IDE and help you when linking large projects. Look forward to more news when 10.4.2 is released.
Status of the Visual Assist integration in RAD Studio
Integrating Visual Assist into C++Builder is on our roadmap. For the first release, we are focusing on the top features: code completion, finding references, navigation, and refactorings. This work is already underway. Visual Assist's C++ parser currently recognizes our C++ extensions (properties, closures, etc.), and we are evaluating various approaches to IDE integration. To learn more about Visual Assist, take a look at https://www.wholetomato.com/features. Try Visual Assist out, and if there are features we should add in C++Builder, send us a feature request.
C++ libraries
Our work on increasing C++Builder compatibility is ongoing, and we are seeing very good results. You may remember from an earlier blog post that we are taking common open-source C++ libraries and making sure they work with C++Builder. (Several new ones are coming to GetIt soon.) This means not only that useful common libraries are more readily available to you this way, but also that you are more likely to be able to pick up any C++ library you want to use with little effort.
These efforts have already borne fruit: not only do we have quite a few libraries in GetIt, with more being added all the time, but the work required to use a library with C++Builder has changed. These days, it is usually straightforward to extend code written for MSVC or GCC with macros (ifdefs) to wrap Embarcadero-specific code as well. The vast majority of RTL and other methods exist, and the libraries can be used directly. Often a library compiles right away. If there is a library you are interested in, we recommend trying it with C++Builder 10.4.1: small changes may be needed, but overall compatibility should be considerably improved.
Desktop UX Summit
In den letzten zehn Jahren hat sich das Anwendungsdesign stark auf mobile oder Webanwendungen konzentriert. Das Webdesign hat dabei das Design anderer Anwendungen stark beeinflusst – oft zum Nachteil. Eine Desktop- oder mobile Anwendung ist keine Website.
In diesem Jahr fand zum ersten Mal der Desktop UX Summit statt – eine kostenlose Online-Konferenz zum Thema Desktop-Anwendungsdesign, an der eine Vielzahl von Referenten teilgenommen haben, die häufig nicht mit Embarcadero-Technologien verbunden sind oder diese nicht nutzen. Wir möchten das Bewusstsein für Desktop-Anwendungsdesign nicht nur unseren eigenen Kunden, sondern den Entwicklern im Allgemeinen vermitteln. Die Konferenz bietet einige großartige Sitzungen und ist kostenlos!
New free tool: Dev-C++
As part of our renewed effort to produce quality tools for C++ development, we would also like to introduce our latest low-footprint open-source code editor, Embarcadero Dev-C++:
Embarcadero Dev-C++ is a new and improved fork of Bloodshed Dev-C++ and Orwell Dev-C++. It is a full-featured IDE and code editor for the C/C++ programming language. It uses the MinGW port of GCC (the GNU Compiler Collection) as its compiler. Embarcadero Dev-C++ can also be used in combination with Cygwin or any other GCC-based compiler. We were able to build this package with a very small memory footprint because it is a native Windows application and does not use Electron. To top it all off, all the work to update this fork was done with the latest version of Embarcadero Delphi. This and other free tools can be downloaded at https://www.embarcadero.com/free-tools/dev-cpp.
C++ news around the world
Finally, a roundup of the latest C++ news and blog posts!
MeetingC++, one of the leading C++ conferences, is online this year. It runs in the Central European time zone and costs €49 for early birds.
The annual LLVM (Clang, LLDB) developers' meeting is also online this year. Tickets are free, but you can also buy a paid supporter ticket.
'The Problem with C': a really interesting post by cor3ntin on how the two languages differ and what C compatibility means for C++.
David I has written a great blog post showing the use of some Boost classes with C++Builder. (A current version of Boost is available in GetIt.) In particular, he demonstrates the circular buffer class. Boost is full of useful tools, and it is great to see some of them showcased.
Adecc Systemhaus publishes a C++ blog. It has some great posts, especially on using standard C++ streams, such as using C++ streams with a TListView.
Incredibuild, maker of an excellent build system for distributing C++ builds across multiple machines, ran a survey on your favorite C++ IDE and how long your applications take to build; Visual Studio, C++Builder, and 'Other' each accounted for roughly 30%.
And finally, C++20 is done! Read more on Herb Sutter's blog.
If your company develops software that you sell into specific industry verticals or horizontal sectors, this is one webinar you should not miss!
In a change from the normal technical sessions, Mary Kelly will join me, Stephen Ball, as we explore the world of ISVs, discuss ISV business models, and look at how companies around the world are achieving higher returns thanks to InterBase. InterBase is very much a database for today, especially thanks to its near-zero administration and simple installation.
With real-world case studies from the financial, medical, and leisure sectors, you will learn how InterBase enables companies to innovate faster, reduce time to market, improve customer experiences, keep up with market trends and, importantly, profit from them at the same time.
October 15 – How ISVs are accelerating innovation while reducing costs, thanks to InterBase
Local times: October 15, 9 AM CDT (Austin); 10 AM EDT (New York); 3 PM BST (London); 4 PM CET (Berlin); 5 PM MSK (Moscow); 7:30 PM IST (Mumbai); 11 PM JST (Tokyo)
Come to Conference 2020 Virtual! I guarantee you will be surprised (once again)!
After seven consecutive years of helping produce the Embarcadero Conference in its in-person format, we had to forget everything we knew and start planning from scratch!
It is not that we are unaccustomed to virtual events, webinars, and the like; that is definitely not the case. But producing a true conference in a fully virtual medium is a new challenge for all of us.
We have worked hard over the past few months, and the final details are being polished. I guarantee you will be surprised by what you find, the quality of the talks, and everything else!
Here is a brief summary of what you can expect:
48+ talks of the highest technical level
A special keynote with Marco Cantù, Jim McKeeth, and David Millington
Q&A sessions with our super MVPs
And all of it will be 100% recorded and made available (only) to registered attendees!
And it is not just about the technical content: your participation will also help someone in great need. Each registration costs the equivalent of a basic food basket, and 100% of the proceeds will be converted and donated directly to those who need it most right now.
Another novelty, aimed at the academic audience: there will be an exclusive track just for them. This track has a special entry fee, corresponding to the donation of a hygiene kit, which will also go to the communities that need it most.
So I am here to reinforce the invitation: do your part, update your technical knowledge, and that of your whole team, and do some good at the same time!
Hello, and welcome to this series! 👋 I’m Daniel, a software engineer at RisingStack, and I’ll be your guiding hand to get to learn Dart and Flutter.
This series is aimed at those who know React-Native, JavaScript, or web development and are trying to get into cross-platform mobile development because I’ll be comparing Dart language examples to JavaScript ones, and Flutter with React and React-Native.
However, if you don’t know any of these technologies yet, don’t let that throw you off from this series - I’ll explain core concepts thoughtfully. Let’s get started!
Let's learn the Dart language as JS developers: We dive into OOP, classes, inheritance, and mixins, asynchrony, callbacks, async/await and streams.
Flutter and Dart are made by Google. While Dart is a programming language, Flutter is a UI toolkit that can compile to native Android and iOS code, has experimental web and desktop app support, and it’s the native framework for building apps for Google’s Fuchsia OS.
This means that you don’t need to worry about the platform and can focus on the product itself. The compiled app is always native code, as Dart compiles to ARM, giving you the best cross-platform performance you can get right now at 60+ fps. Flutter also speeds up the development cycle with stateful hot reload, which we’ll make use of mostly in the last episode of this series.
By the end of this series, you’ll have a basic understanding of Dart, the basic data structures, object-oriented programming, and asynchrony with futures and streams.
In Flutter, you’ll take a look at widgets, theming, navigation, networking, routing, using third-party packages, native APIs, and a lot more. Then, in the last episode of this series, we’ll put it all together and build a full-blown minigame together! Seems exciting? Then keep reading!
This episode of the series focuses on the Dart part of this ecosystem. We’ll look into Flutter in the next episode, and then we’ll put it all together into a fun minigame in the last episode. I’m excited to see what you’ll all build with Flutter, so let’s jump right in!
Sidenote: throughout this series, I'll use the “👉” emoji to compare JS and Dart language examples. Typically, the left side will be the JS and the right side the Dart equivalent, e.g. console.log("hi!"); 👉 print("hi!");
Dart vs JavaScript - the pros and cons
JavaScript and Dart cannot be directly compared as they both have different use cases and target audiences. However, they both have their own advantages and disadvantages, and after a few projects with both technologies, you’ll get to see where they perform well.
There are some things, however, that you’ll notice as you are getting into the Flutter ecosystem: Dart has a steeper learning curve with all those types, abstract concepts and OOP - but don’t let that throw you off your track.
JavaScript has a bigger community, and hence more questions on StackOverflow, more packages, resources, learning materials, and meetups.
But once you get the hang of Dart, you’ll notice that Dart and Flutter have much better developer tooling and are faster, and while npm has more packages than pub.dev (Dart’s package repository), their average quality is lower.
Variables and types in the Dart language
At first glance at a Dart code snippet, you may notice a concept you might be unfamiliar with if you only know JS: Dart is type safe.
It means that when you want to define a variable, you’ll either have to provide an initial value and let the compiler figure out what type matches it (implicit typing), or (and this is the optimal case) you’ll have to provide the type of the variable explicitly.
In programming, types define what kind of data you are trying to store in your variable - for example, with an int type, you’ll be able to store an integer number (e.g. 7). In Dart, the most commonly used primitive types are int, double, String, and bool. Here are some language examples:
// Heads up! This is some nasty Dart code!
var number = 0; // Dart will implicitly give this variable an int type. var, let 👉 var
int myInt = 3; // this is an explicitly typed variable
final double pi = 3.14; // const 👉 final, static and const; more info below
myInt = 3.2; // will throw an error as 3.2 is not an integer
pi = 3.2; // will throw an error as pi is marked with final
String name = "Mark";
There’s also a “fallback-type” or a non-typed type: dynamic. In Dart, the dynamic type can be used whenever the exact type of a parameter, argument, list item, or anything else cannot be determined while writing your code. Please always be extra careful when working with dynamically typed variables and add extra safety barriers to your code so that your app doesn’t crash when an unexpected type gets passed. Try to avoid using dynamic as much as possible.
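For example, here's a minimal sketch (with a made-up value) of guarding a dynamic variable with runtime is checks so that an unexpected type doesn't crash your app:

```dart
void main() {
  // `dynamic` opts out of static type checking, so guard values
  // with runtime `is` checks before using them.
  dynamic mystery = "42"; // pretend this came from an API we don't control

  if (mystery is int) {
    print(mystery + 1); // inside the check, `mystery` is treated as an int
  } else if (mystery is String) {
    print(int.parse(mystery) + 1); // our made-up value takes this branch
  } else {
    print("unexpected type: ${mystery.runtimeType}");
  }
}
```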
Oh, and a quick tip: to play around with Dart, you can use DartPad. It’s an online Dart compiler, or a “playground” made by the Dart team.
A few words about final, static and const
In Dart, we can create constants with three keywords: final, static, and const. A final variable can be assigned only once, at runtime, while a const is created at compile time; static makes a variable belong to the class itself rather than to its instances. You can think of const as an even stricter final. (When in doubt, use final and you'll be just fine.) To read more about the keywords final, static, and const, check out this article on the official Dart blog.
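To make the difference concrete, here's a short sketch (the Circle class is made up for illustration):

```dart
class Circle {
  static const double pi = 3.14159; // compile-time constant, belongs to the class itself
  final double radius;              // assigned once per instance, at runtime

  Circle(this.radius);

  double get area => pi * radius * radius;
}

void main() {
  final unitCircle = Circle(1); // a `final` local can't be reassigned later
  print(unitCircle.area);       // prints 3.14159
  // unitCircle = Circle(2);    // error: a final variable can only be set once
  // Circle.pi = 3;             // error: constants can't be reassigned
}
```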
To get to know more about variables and the built-in types in Dart, please refer to this short explanation.
Writing your first Dart language function
Type-safety will come up in a lot of places - for example, when writing functions, you’ll have to define the return type and the type of the arguments.
// return type, function name, parameters with their types and names
double addDoubles(double a, double b) {
return a + b;
}
addDoubles(3.2, 1.4); // => will return 4.6
And when your function doesn’t return anything, you can throw in the keyword void - just like the entry point of every Dart program, void main() does.
void main() {
print(addDoubles(2, 3)); // console.log() 👉 print()
// this function does not return anything!
}
What’s an entry point anyways? In JavaScript, the code starts executing from the first line and goes linearly line-by-line until it reaches the end of the file. In Dart, you have to have a main() function that will serve as the body of your program. The compiler will start the execution with the main function, that’s where it enters your code - hence the name entry point.
Control flow statements - if, for, while, etc.
They look and work just like in JavaScript. Here are some examples:
int age = 20;
if(age >= 18) {
print("here’s some beer! 🍻");
} else {
print("🙅‍♂️ sorry, no alcohol for you...");
}
// let’s count from 1 to 10!
// p.s.: notice the `int i`
for (int i = 1; i <= 10; i++) {
print("it’s number $i"); // string interpolation: ${} 👉 $ (for variable names)
}
// while loops:
// please don’t run this snippet, it will probably crash or run out of resources...
while("🍌" == "🍌") { // oh, and forget ===, you don’t need it in Dart!
print("Hey! 👋 I’m a banana!");
}
Arrays and objects
In JavaScript, to store multiple pieces of data together, we use arrays and objects. In Dart, we call them lists and maps, and they work a bit differently under the hood (and they have some extra APIs!). Let’s look into them!
Array 👉List
In Dart, a list ideally stores homogeneous data. That’s right - no more [1, "banana", null, 3.44] (ideally)! You can create a list with the [] syntax you are already familiar with from JS, or with the new List() constructor.
// the usual, implicitly typed, [] syntax
var continents = ["Europe", "North America", "South America", "Africa", "Asia", "Australia"];
continents.add("Antarctica"); // .push() 👉 .add()
// please note that when throwing in multiple types of data, Dart will fall back to the `dynamic` type for your list:
var maybeBanana = [1, "banana", null, 3.44];
// the `new List()` syntax, with a dynamic length:
// note the List<T> syntax: you need to pass in the desired value type between the <>s
List<int> someNiceNumbers = new List();
someNiceNumbers.add(5);
// fixed-length list:
List<int> threeNiceNumbers = new List(3); // this list will be able to hold 3 items, at max.
// dynamic list with the new List() syntax:
List<dynamic> stuff = new List();
stuff.add(3);
stuff.add("apple"); // this is still totally legit because of the <dynamic> type
Now that we’ve covered lists, we can move on to objects. In JavaScript, objects store key-value pairs, and the closest we can get to this data structure in Dart is a Map. Just as we saw with lists, we can define a Map both with the { ... } literal and with the new Map() constructor.
// the usual { ... } literal
var notesAboutDart = {
"objects": "hey look ma! just like in JS!", // note: unlike JS, Dart map keys are expressions, so string keys need quotes
"otherStuff": "idc we’ll look into them later"
};
// the new Map constructor
Map notesAboutJs = new Map();
// … and of course, you can explicitly type Maps!
// typed Map literal:
Map<String, int> prices = <String, int>{
"apple": 100,
"pear": 80,
"watermelon": 400
};
// typed Map constructor:
final Map<String, String> response = new Map<String, String>();
Knowing these basics will be just enough for now - but if you want to get to know the advanced stuff like HashMaps right away, be sure to check out the API docs of the Map class.
Imports and exports
In JavaScript, you could simply expose values from your files with export or module.exports and refer to them in other files with import or require(...). In Dart, it’s both a bit more complex and simpler than that.
To simply import a library, you can use the import statement and refer to the core package name, a library name, or a path:
import 'dart:math'; // import math from “math” 👉import “math”;
// Importing libraries from external packages
import 'package:test/test.dart'; // import { test } from “test” 👉import “test/test”;
// Importing files
import 'path/to/my_other_file.dart'; // this one is basically the same
// Specifying a prefix
import 'dart:math' as greatMath;
But how about creating your own libraries or exporting stuff? Dart lacks the usual public, protected or private keywords that Java has for this purpose (sidenote: Dart is compared to Java a lot of times) and even the export keyword that we’re used to in JavaScript. Instead, every file is automatically a Dart library and that means that you can just write code without explicitly exporting stuff, import it in another file, and expect it to work out just fine.
If you don’t want Dart to expose your variable, you can (and should!) use the _ prefix. Here’s an example:
// /dev/a.dart
String coolDudes = "anyone reading this";
String _hiddenSuffix = "...with sunglasses on 😎";
// /dev/b.dart
import "./a.dart";
print("cool dudes: $coolDudes"); // => cool dudes: anyone reading this
print("cool dudes: $coolDudes $_hiddenSuffix") // => will fail as _hiddenSuffix is undefined in this context
Oh, and just a quick note about naming variables: camelCasing is considered a best practice, just like capitalizing abbreviations longer than two characters (e.g. HTTP => Http, or HttpConnectionInfo). To know more about writing efficient and stylish Dart code, make sure that you read the Effective Dart guide later on your journey, once you are confident with the basics.
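As a quick illustration (all names here are made up):

```dart
// lowerCamelCase for variables, constants, and function names:
const int maxRetryCount = 3;

// UpperCamelCase for types; abbreviations longer than two letters
// are capitalized like words (HTTP -> Http, JSON -> Json):
class HttpConnectionInfo {}
class JsonDecoder {}

// and as we saw above, a leading underscore keeps a
// declaration private to its library:
String _internalNote = "not visible outside this file";
```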
A quick intro to OOP and classes
Dart is an object oriented language - but what does that mean for you?
If you don’t know OOP yet, that means that you’ll have to learn a brand new paradigm of programming that is utilized in many popular languages like Java, C#, and of course, Dart. While introducing you to OOP isn’t the main goal of this series, I’ll provide you a quick intro so that you can start off with Dart and Flutter.
The first thing to settle is that JavaScript is neither strictly OOP nor strictly functional - it contains elements of both paradigms.
It’s up to your preferences, the project you work on, and the desired target framework to choose (if a strict decision is ever made) between the two concepts. Dart, on the other hand, is pretty strict about being OOP.
Here’s a little chart I made to help you wrap your head around the main differences between functional and object-oriented programming:
To sum up: before OOP, there was procedural programming. There were a bunch of variables and functions lying around - and it was simple, but it often led to spaghetti code. To solve this, engineers came up with OOP, where we group related functions and variables into a unit. This unit is called an object; inside it, variables are called properties and functions are called methods. While creating such a unit, always try to be descriptive. To practice making up these units, you can take real-world objects around you and try to describe them with properties and methods.
A car, for example, would have properties like its brand, color, weight, horsepower, license plate number, and other things that describe a car. Meanwhile, it would have methods for accelerating, braking, turning, etc.
Of course, you don’t have cars inside your code, so let’s put that abstract idea into code! A great example of a unit inside JS would be the window object. It has properties like the width and height of the window and has methods for resizing and scrolling.
The four principles of OOP are:
Encapsulation: Group variables (properties) and functions (methods) into units called objects. This reduces complexity and increases reusability.
Abstraction: You should not be able to directly modify the properties or access all methods - instead, think of writing a simple interface for your object. This helps you isolate the impact of changes made inside the objects.
Inheritance: Eliminate redundant code by inheriting stuff from another object or class. (Dart achieves this with mixins - we’ll look into concrete examples later). This helps you keep your code base smaller and more maintainable.
Polymorphism: Because of the inheritance, one thing can behave differently depending on the type of the referenced object. This helps you in refactoring and eliminating ugly ifs and switch/case statements.
Real-Life Dart Examples
If you are confused or intimidated by this concept, don’t worry. Looking at real-life Dart examples will help you wrap your head around this whole mess we call OOP. Let’s look at a simple class with some properties and a constructor.
class Developer {
final String name;
final int experienceYears;
// Constructor with some syntactic sugar
// a constructor creates a new instance of the class
Developer(this.name, this.experienceYears) {
// The code you write here will run when you construct a new instance of the Developer class
// e.g. with the Developer dev = new Developer(“Daniel”, 12); syntax!
// Notice that you don't have to explicitly type
// this.name = name;
// one by one. This is because of a Dart syntactic sugar
}
int get startYear =>
new DateTime.now().year - experienceYears; // read-only property
// Method
// notice the `void` as this returns nothing
void describe() {
print(
'The developer is $name. They have $experienceYears years of experience so they started development back in $startYear.');
if (experienceYears > 3) {
print('They have plenty of experience');
} else {
print('They still have a lot to learn');
}
}
}
And somewhere else in the code, you can construct a new instance of this class:
void main() {
Developer peter = new Developer("Peter", 12);
Developer aaron = Developer("Aaron", 2); // in Dart 2, the new keyword is optional
peter.describe();
// this will print the following to the console:
// The developer is Peter. They have 12 years of experience so they started development back in 2008.
// They have plenty of experience.
aaron.describe();
// =>
// The developer is Aaron. They have 2 years of experience so they started development back in 2018.
// They still have a lot to learn.
}
And that’s it! You’ve just made your first Dart class with properties and methods. You used typed variables, read-only (getter-only) properties, control flow statements, got the current year, and printed some stuff out to the console.
Congratulations! 🎉
Inheritance and mixins in Dart
Now while you have momentum, let’s have a peek at inheritance and mixins.
Once you have a solid knowledge of classes and start to think of more complex systems, you’ll feel the need for some way to inherit code from one class to another without copying and pasting code all over the place and making a big ol’ bowl of spaghetti. ❌🍝
For this reason, we have inheritance in OOP. When inheriting code from one class to another, you basically let the compiler copy and paste members of the class (“members” of the class are methods and properties inside a class), and add additional code on top of the previous class. This is where polymorphism kicks in: the same core code can exist in multiple ways by inheriting from a base class (the class you inherit from).
Think of HTML. There are several similar elements that HTML implements, like a TextBox, a Select or a Checkbox. They all share some common methods and properties like the click(), focus(), innerHTML, or hidden. With class inheritance, you can write a common class like HtmlElement and inherit the repetitive code from there.
How does this look in practice? In Dart, we use the extends keyword to inherit code from a base class. Let’s look at a short example:
// notice the extends keyword.
// we refer to the Developer class we defined in the previous snippet
class RisingStackEngineer extends Developer {
final bool cool = true;
String sunglassType;
RisingStackEngineer(String name, int experienceYears, this.sunglassType)
: super(name, experienceYears); // super() calls the parent class constructor
void describeSunglasses() {
print("$name has some dope-ass $sunglassType-type sunglasses.");
}
}
And what can this class do? Let’s look at this snippet:
void main() {
RisingStackEngineer berci = RisingStackEngineer("Bertalan", 300, "cool");
berci.describe(); // .describe(); is not defined on the RisingStackEngineer class directly - it’s inherited from the Developer class. We can still use it though!
berci.describeSunglasses(); // => Bertalan has some dope-ass cool-type sunglasses
}
Isn’t that amazing? Let’s make it even better with mixins. Mixins help you mix more than one class into your hierarchy. For example, let’s give our developers some keyboards:
class Keyboard {
int numberOfKeys = 101;
void describeKeyboard() {
print("The keyboard has $numberOfKeys keys.");
}
}
And use a mixin to create some sort of developer-keyboard hybrid person with Dart and the with keyword:
class WalkingKeyboard extends Developer with Keyboard {
// ...
}
And that’s it! If you want to practice Dart before we move on to our last topic for today (asynchronous programming), be sure to play around with DartPad, an online compiler made by the Dart team.
Write some statements, create some classes and maybe even inherit some code. Don’t just read - pause this article and write some code! Once you feel comfortable with these base concepts (typing your variables, writing lists, maps, using control flow statements, creating classes), we’ll move forward to asynchronous programming with Dart.
Asynchronous programming in the Dart Language
Writing asynchronous code is a must when communicating with a server, working with files, or using native APIs. In JavaScript, we have callbacks, Promises, and async/await for timing our code. Luckily, Dart utilizes the very same concepts and embraces async/await to avoid callback hell.
Let’s look at a callback example first:
// Promise 👉 Future
// the method return type is an asynchronous void
Future<void> printWithDelay(String message) {
// Future.delayed delays the code run with the specified duration
return Future.delayed(Duration(seconds: 1)).then((_) {
print(message);
});
}
void main() {
print("hey hi hello");
printWithDelay("this message is printed with delay");
}
And look at the very same code with async/await:
// notice that you have to add in the async keyword to be able to await a Future
Future<void> printWithDelay(String message) async {
await Future.delayed(Duration(seconds: 1));
print(message);
}
void main() {
print("hey hi hello");
printWithDelay("this message is printed with delay");
}
And that was it for the Promise 👉 Future part. If you’d like to know more about the Future API, be sure to read the documentation. But stay tuned! Dart has another API for handling asynchrony: Streams. 🤯
Streams in the Dart Language
Dart’s main advancement in asynchrony compared to many other languages is its native support for streams. If you want a simple way to wrap your head around the difference between Futures and Streams, think of the following: a Future handles a single “finished” future value (e.g. a web API response), while a Stream handles a continuous future (e.g. an asynchronous for loop) with zero or more values.
Consider the following chart:
How do you work with data received from Dart Streams? Whenever a new event happens on the stream (either new data is received or an error happens), Dart notifies a listener. A listener is a snippet of code that subscribes to the events of a stream and processes data whenever an event is received. You can subscribe to a stream with the .listen() function, provide a callback, and boom, there you go! Isn’t that easy? 🤩 Let’s look at an example to get the hang of it:
// this is an imaginative stream that gives us an integer every one second
final exampleStream = NumberCreator().stream;
// e.g. 1, 2, 3, 4, ...
// print the data received from the stream
final subscription = exampleStream.listen((data) => print(data));
By default, Dart streams support only one listener. Adding another listener to such a stream would throw an exception - however, there is a tool that lets us add multiple listeners to a single stream: broadcast streams! You can just call .asBroadcastStream() at the end of your stream, and you’ll be able to add multiple listeners:
// same code but with a broadcast stream. Notice the .asBroadcastStream() at the end!
final exampleStream = NumberCreator().stream.asBroadcastStream();
// and you’ll be fine adding multiple listeners
final subscription = exampleStream.listen((data) => print(data));
final subscription2 = exampleStream.listen((data) => print(data));
But while we’re at listeners, let’s have a closer look at that API. I mentioned that you can receive either data or an error in a stream, so how can you handle errors? Below is a slightly more advanced listener with error handling. You can also run code when a stream finishes sending data (i.e. won’t send any more), explicitly decide whether to keep listening when an error occurs, and a lot more. Here’s the code:
final advancedSubscription = exampleStream.listen(
// this runs when new data is received
(data) {
print("data: $data");
},
// handle errors when one occurs
onError: (err) {
print("error: $err");
},
// do not cancel the subscription when an error occurs
cancelOnError: false,
// when the stream finishes, run some code.
onDone: () {
print("done!");
}
);
Oh, and if that weren’t enough for you, you can do things with the subscription object itself too:
advancedSubscription.pause(); // pause the subscription
advancedSubscription.resume(); // resume the subscription
advancedSubscription.cancel(); // remove/cancel the subscription
There is still a lot more that can be done with streams in Dart: you can manipulate them, filter their data, and of course, we didn’t have a look at asynchronous iterators and creating streams - however, this should be just enough for you to start development with Flutter.
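That said, here's a hedged sketch of what creating a stream with an async* generator function can look like - the NumberCreator from the earlier snippets could be built in a similar (though not necessarily identical) way:

```dart
// countUpTo is a made-up generator, not part of any library.
Stream<int> countUpTo(int max) async* {
  for (int i = 1; i <= max; i++) {
    await Future.delayed(Duration(milliseconds: 100));
    yield i; // emit the next value into the stream
  }
}

void main() async {
  // `await for` consumes the stream like an asynchronous for loop
  await for (final n in countUpTo(3)) {
    print(n); // prints 1, 2, 3 - one number every 100 ms
  }
}
```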
If you want to know more about asynchrony in Dart, check out the following videos made by the Flutter team:
And that’s it for asynchronous programming - for now!
Summing our beginner Dart tutorial up
Congratulations on making it this far into the course! 🎉 If it was a bit dry or heavy for you, don't worry: this was a Dart-only episode. In this episode, we looked at a crap ton of stuff! We went from variables, types, and control flow statements to lists, maps, imports, and exports.
Then we came to the heavier parts of the Dart ecosystem. We first had a look at why OOP exists, what its pros are, and where it performs well; then we looked at classes, inheritance, and mixins; and as if that weren't enough, we even looked at asynchrony, callbacks, async/await, and streams.
Don't forget: if you want to practice all this new stuff we just learned about, you can always hit up DartPad and play around with it for a bit. (I even encourage you to do so, as you'll need strong Dart knowledge to move on to Flutter.)
In the next episode, we’ll look into Flutter: we’ll start with the CLI and a hello world app, and have a look at widgets, lists, styling, state management, props, routing, and networking - and in the last episode, we’ll put it all together and build a fun game. Until then, stay tuned!
We have just released a patch for C++Builder 10.4.1 that affects the use of components written in C++ in the form designer. This patch addresses the following issue:
Event handlers were not always generated in the IDE with a method signature compatible with the event handler type (RSP-29734).
When using a component compiled with the classic compiler in the IDE's form designer, generating an event handler in the IDE (such as by double-clicking an event handler entry in the Object Inspector) often created a method with a signature incompatible with the event, causing the error "Property and method are not compatible". This is fixed in this hotfix.
You should rebuild your C++ component packages (design-time and runtime) after installing this hotfix, or obtain an updated version from your component vendor.
Installing the patch
The patch can be installed automatically by the IDE. When you open RAD Studio or C++Builder, you will see a note on the Welcome screen that an update is available. Clicking it opens GetIt. You can also open GetIt via the Tools > GetIt Package Manager menu item and look for the "Patches and Hotfixes" category.
Because this patch overwrites files the IDE has loaded, the IDE is closed before installation. This is the first time we have released a patch that the IDE installs to modify files used by the IDE itself. It is part of our overhaul of patch distribution that began in 10.4. It's great technology!
The IDE will close and a few command-line windows will open. Keep an eye on the taskbar for a flashing permissions-elevation prompt, since the installer needs elevated permissions to install files into your Program Files folder.
Wait a few seconds and our patch tool will run, followed by the IDE restarting. Done!
If you don't want the IDE to install the patch, you can also download it from the my.embarcadero.com portal and install it manually. However, we recommend installing from within the IDE: it's much easier, and once installed, the IDE will know it is installed and won't prompt you again.
A note on Clang-built components and event handlers
Note: components built with the Clang-based compiler also have issues when generating event handlers. We currently recommend that any C++ component intended for design-time use be built with the classic compiler. This applies to both the design-time and runtime packages. Any component not intended for use in the form designer can be built with the Clang compiler.
We plan to address this: Clang should be a complete replacement for Classic in every respect. (For example, 10.4 introduced a new debugger, ensuring it is better than the classic one.)
Since the introduction of the GetIt Package Manager in RAD Studio XE8, Embarcadero has focused on making it a great tool for distributing additional libraries, components, tools, samples, demo projects, styles, and more to RAD Studio customers. We use it to distribute Embarcadero add-ons, but also third-party open-source libraries and free offerings, as well as trial versions of paid components. The GetIt engine is also used for product installation and for delivering patches.
Until recently, discovering and navigating GetIt packages and categories, and learning more about the full GetIt content catalog, was only possible from within the RAD Studio IDE itself.
We are pleased to announce a new public-facing website, https://getitnow.embarcadero.com, which lists the GetIt content for the latest RAD Studio version and lets users easily navigate the same categories and filter by product and vendor. This new website queries the GetIt server to ensure the content stays aligned over time.
The GetItNow site
When you open https://getitnow.embarcadero.com/, you will see a list similar to the one RAD Studio presents in the GetIt Package Manager dialog.
By default, the site lists all packages sorted by date, showing the most recently created or modified components first. In the screenshot above, you see the components sorted by name.
The side navigation panel also offers the ability to filter by supported product, by update subscription, and by category. There is also a search field that lets you search by package name. Opening a component shows a detail view with more information, including license details:
Note that the image on the page is just a placeholder, and that the information displayed does not indicate whether the package is available in all SKUs or only in the Enterprise and Architect SKUs. At the bottom of the page, you can see additional packages from the same vendors and related packages (in this case, I'm showing information for the Alien Invasion game):
The page for each component has a unique URL, such as https://getitnow.embarcadero.com/AlienInvasion-1.11/, so you can share a link with another developer or on social media. We have made some extra effort to ensure that when you link to the pages on social media, the site displays a nice preview. To do this, use the Share links for Twitter and Facebook on the package page.
Site navigation
The site has additional navigation options that you can reach via the main menu items in the top banner:
We are excited about the value the launch of the GetIt Portal website offers our customers. It makes it easier to browse GetIt content and share it with other developers, and it lets us highlight all the great packages from our third-party ecosystem that are available in GetIt. We have plans to extend the site in the future, and we welcome your feedback.
One of the most frequently shown features of Delphi is its Form Designer, with its ability to design forms and connect components in an easy and intuitive way. I can't count the occasions where I have seen a form decorated with some data controls connected to a TDataSource, which itself is connected to a TDataSet descendant (see the docwiki).
Often these controls are placed on the form, but that approach is usually not recommended. While the data controls must reside on the form, and the data source(s) should preferably be on the form too (I'll explain why later), the TDataSet descendants are better placed on a separate data module. This separates the data logic from the UI and (hopefully) encourages reuse.
This simple example has two data modules and two forms:
The data module TDbConnectDM contains the connection component and the wait cursor component required by FireDAC.
The second data module TMainDM has the dataset, a TFDQuery with its static fields, and an action with its action list. The query is connected to the TFDConnection from DBConnectDM.
The form TMainForm introduces a data source connected to the query from MainDM and a TDBGrid and TDBNavigator connected to the data source. A simple TButton to open an instance of the second form (TEditForm) completes the controls on this form.
The remaining form TEditForm also has a data source connected to the query from MainDM, a couple of TDBEdit controls connected to the data source and configured for the different fields of the query and a TButton connected to the action of MainDM.
The real magic happens during runtime where all these connections are re-established when the forms and data modules are instantiated.
The connections between all these components are responsible for the lack of actual code to write. The only code in all data modules and forms is this:
procedure TMainDM.Action1Execute(Sender: TObject);
begin
tblData.Post;
end;
procedure TMainDM.Action1Update(Sender: TObject);
begin
Action1.Caption := 'Save ' + tblDataEMP_NO.AsString;
end;
procedure TMainForm.btnEditClick(Sender: TObject);
var
instance: TEditForm;
begin
instance := TEditForm.Create(nil);
try
instance.ShowModal;
finally
instance.Free;
end;
end;
Not sure about you, but selecting a record, clicking Edit Record, and closing the dialog before selecting the next record doesn't seem very comfortable to me. Let's change from a modal edit window to a non-modal one that always works on the current record.
It turns out that is even a bit simpler: we add the Edit Form to the application's auto-create forms and reduce the button click event to simply showing that form:
procedure TMainForm.btnEditClick(Sender: TObject);
begin
EditForm.Show;
end;
So what? Nothing new here so far. I guess most of you have been writing similar applications for ages. OK, let's get a bit mean now.
What about having multiple edit forms open simultaneously, each with a different record?
From a user's perspective this might look like a small extension of the program, but as developers we know that a dataset can have only one current record. If each edit form is to work on its own record, we need a separate dataset for each edit form.
Before someone suggests placing the dataset back into the form: no, we don't want that! In a real-world scenario there may be more datasets with complex interactions and business rules to follow. Things are never as simple as shown in a demo.
This also seems a good place to explain the "TDataSource should stay in the form" rule mentioned at the beginning.
The official reason is: one can easily change the dataset in the data source of an individual form instance without interfering with other forms that also use a data source from a data module.
The unofficial one is: sometimes the IDE loses these connections, for instance when the data module cannot be opened. In that case it is easier to re-connect the dataset of one data source on the form instead of the data sources of several data controls.
A valid approach is to create a new instance of the data module for each form instance and let the form's components use that one. To do that, we declare a new property of type TMainDM in the Edit Form and create an instance in the form's constructor.
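A minimal sketch of that declaration and constructor could look like the following. The class and field names come from the example above; the published property name is an assumption for illustration, and this is the naive version before any naming issues are dealt with:

```delphi
type
  TEditForm = class(TForm)
  private
    FDataModule: TMainDM;
  public
    constructor Create(AOwner: TComponent); override;
    // each form instance gets its own data module instance
    property DataModule: TMainDM read FDataModule;
  end;

constructor TEditForm.Create(AOwner: TComponent);
begin
  // create the local data module before inherited streams in the DFM,
  // so it already exists when component references are resolved
  FDataModule := TMainDM.Create(Self);
  inherited;
end;
```

Owning the data module by the form (Create(Self)) means it is freed automatically together with its form instance.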
We remove the Edit Form from the auto-create forms and create a new instance in the button click event, just like before, but non-modal.
procedure TMainForm.btnEditClick(Sender: TObject);
var
instance: TEditForm;
begin
instance := TEditForm.Create(Self);
instance.Show;
end;
Each instance is owned by the main form, so they will be freed when the main form is destroyed.
Let’s see if that is sufficient.
Not quite! Although we have separate Edit forms, each with its own local data module instance, the current record still follows the dataset of the main form. Why is that?
It's that magic we saw before that re-connects the controls at runtime in the same way we specified in the Designer. For some reason, this magic prefers the auto-created instance of the MainDM data module over the local ones in the Edit Form.
So let’s have a quick look at the internals behind this magic.
Peeking into the DFM, we see that the connection from the data source to the dataset looks like this:
object dsData: TDataSource
DataSet = MainDM.tblData
Left = 180
Top = 16
end
The target of this connection is simply the qualified name of the tblData component in the MainDM data module. So the connections are somehow resolved by component names.
But wait, the local TMainDM instances are also named MainDM. Why are they ignored when it comes to finding the MainDM instance to connect the dataset to?
The relevant code is located in the method TReader.DoFixupReferences found in System.Classes:
aPropFixup := TPropFixup(FFixups[I]);
CompName := aPropFixup.FName;
ReferenceName(CompName);
Reference := FindNestedComponent(aPropFixup.FInstanceRoot, CompName);
if (Reference = nil) and Assigned(FOnFindComponentInstance) then
  FOnFindComponentInstance(Self, CompName, Reference);
The interesting finding is that the reader first tries to resolve the reference via FindNestedComponent, looking inside the root component being loaded (i.e. the TEditForm instance). Only when no suitable component is found does it ask FOnFindComponentInstance for help.
But when nested components are preferred, our local instance of TMainDM should be found first, shouldn’t it? Inspecting the code for FindNestedComponent reveals that it actually uses FindComponent inside the Edit Form to access the TMainDM instance. There are two possible reasons for not finding it:
The instance is not created yet
The instance has a different name than we expect
We can rule out the first one as we create the instance before we call inherited inside the constructor. That puts the blame on the second one.
We can find a hint in TReader.ReadRootComponent, where a call to FindUniqueName is responsible for renaming our local TMainDM instances. As we cannot change the reader code (or are at least not willing to, although there are ways), we just have to cope with this renaming of our local data module instances.
How can we do that? As so often, there is more than one way.
The easiest one is to simply rename the instance back to its original name. Unless your application relies on unique names for these data modules, there should be no harm in this approach.
procedure ResetComponentName(AComponent: TComponent);
begin
  // Strip the leading 'T' from the class name, e.g. TMainDM -> MainDM
  AComponent.Name := AComponent.ClassName.Substring(1);
end;
constructor TEditForm.Create(AOwner: TComponent);
begin
  FDataModule := TMainDM.Create(Self);
  ResetComponentName(FDataModule);
  inherited;
end;
We also have to synchronize the dataset in the local data module with the current record in the Main Form. (We should have done that before, but it wouldn’t have helped either.)
procedure TMainForm.btnEditClick(Sender: TObject);
var
  instance: TEditForm;
begin
  instance := TEditForm.Create(Self);
  instance.DataModule.SyncRecord(MainDM);
  instance.Show;
end;
procedure TMainDM.SyncRecord(Source: TMainDM);
begin
  if Source = Self then Exit;
  tblData.Locate(tblDataEMP_NO.FieldName, Source.tblDataEMP_NO.Value);
end;
Looks much better now, doesn’t it? And we haven’t even added much new code to make it work.
In case you cannot live with the non-unique naming of the local TMainDM instances (they all appear in the Screen.DataModules list), there is another approach: change all the references to use the new name of the data module. Unfortunately, this requires a bit more coding.
The TReader offers an event, OnReferenceName, which is called directly before the FindNestedComponent call in DoFixupReferences. If we could connect a handler to this event, we could change the reference in a suitable way. So how can we get our hands on the TReader instance to wire up that event?
In their infinite wisdom the creators of the Delphi component architecture gave us the virtual ReadState method with the current TReader as a parameter. So we override the ReadState in TEditForm and inject our event handler.
private
  FDataModule: TMainDM;
  FOnReferenceName: TReferenceNameEvent;
protected
  procedure AdjustReference(var Value: string; AComponent: TComponent);
  procedure MyReferenceName(Reader: TReader; var Name: string);
  procedure ReadState(Reader: TReader); override;
public
  constructor Create(AOwner: TComponent); override;
  property DataModule: TMainDM read FDataModule;
end;
procedure TEditForm.AdjustReference(var Value: string; AComponent: TComponent);
var
  fromName: string;
  toName: string;
begin
  fromName := AComponent.ClassName.Substring(1); // Remove T
  toName := AComponent.Name;
  if Value = fromName then
    Value := toName
  else if Value.StartsWith(fromName + '.') then
    Value := Value.Remove(0, fromName.Length).Insert(0, toName);
end;
procedure TEditForm.MyReferenceName(Reader: TReader; var Name: string);
begin
  AdjustReference(Name, DataModule);
  if Assigned(FOnReferenceName) then
    FOnReferenceName(Reader, Name);
end;
procedure TEditForm.ReadState(Reader: TReader);
begin
  FOnReferenceName := Reader.OnReferenceName;
  try
    Reader.OnReferenceName := MyReferenceName;
    inherited;
  finally
    Reader.OnReferenceName := FOnReferenceName;
  end;
end;
Compared to our previous experience, that looks like quite a bit of code. For a real-world scenario, one should consider placing most of the code in a common ancestor class, making MyReferenceName virtual and simply overriding it in derived classes where necessary.
procedure MyReferenceName(Reader: TReader; var Name: string); virtual;

procedure TBaseForm.MyReferenceName(Reader: TReader; var Name: string);
begin
  if Assigned(FOnReferenceName) then
    FOnReferenceName(Reader, Name);
end;

procedure MyReferenceName(Reader: TReader; var Name: string); override;

procedure TEditForm.MyReferenceName(Reader: TReader; var Name: string);
begin
  AdjustReference(Name, DataModule);
  inherited;
end;
Finally, we have found a way to use the Delphi Form Designer to connect components across forms, frames and data modules without losing these connections for multiple, dynamically instantiated instances. A benefit of this approach is the minimal amount of code we have to write – and less code means fewer bugs and easier maintenance.
The sources to the program shown in this article can be downloaded here:
TweakingDFMLoading.zip.
You might have to change the database connection to something available on your system. The sample code uses the employee.gdb database from the samples folder, connected via a standard InterBase developer instance. If you use another database, you will probably have to change the data fields in the Edit Form.
We are organizing DelphiCon WorldWide as an online Delphi event for 2020. This event differs from the CodeRage of the past in that DelphiCon focuses on Delphi.
Three days – Tuesday, November 17 through Thursday, November 19.
9 AM to 1 PM CST each day
Only one track – no more scheduling conflicts!
A mix of live panels and traditional sessions with Q&A
Data has become one of the most important assets of any company. As databases evolve to support big data, highly available web applications, and more complex layers of business logic, the applications that present the user interface and business logic for this change have become more challenging. Many of us application developers don't just write application code; we spend a fair amount of time developing and tuning queries to improve application performance and quality. That's why we use RAD Studio to build these applications, for more efficient development and faster time to market.
While RAD Studio lets us develop our applications, there are certain aspects that can be improved by other tools, such as visualizing data and building or analyzing queries.
Aqua Data Studio is a productivity IDE for anyone who works with data. Whether you develop and manage the database or just need to build and run queries, it is an easy-to-use, feature-rich database IDE that lets you work quickly and efficiently to get to your data.
Multi-OS and Multi-Platform
Developers are no longer tied to working with a single operating system or database server. One of the main benefits of Aqua Data Studio is the ability not only to connect to more than 35 different data sources, including Excel spreadsheets, but also to install the software on Windows, macOS, and even Linux.
Easily Generate and Build ER Models
In many cases, we don't have to start from scratch and build a new ER model of our databases, or even generate one. But for those who do generate or build models, it is great to have an easy-to-use tool that lets you see the complete picture of your database, its objects, and how the relationships interconnect.
Schedule SQL Tasks
Repetitive SQL jobs can consume time and resources, so it is important to have an easy way to schedule and analyze job activity for your data sources. Using an easy-to-configure SQL scheduler can make running scripts much easier in the future.
Build Queries Visually
SQL is a very powerful language, but forming complex queries can be time-consuming, especially when you have subqueries, joins, database-specific SQL statements, or slow-running queries. The Query Builder lets you build and optimize queries with a simple, easy-to-use drag-and-drop interface.
Create Visuals from Any Data
You can analyze data with an easy-to-use drag-and-drop interface. You can take your query results and drag them onto worksheets that create engaging visualizations of your data before adding them to dashboards. Drag data dimensions directly onto your blank canvas to start visualizing and analyzing your data.
Click here if you want to see more detailed videos and resources on Aqua Data Studio.
Aqua Data Studio offers many more features, so give it a try. If you are a RAD Studio Architect user with an Update Subscription, you already have access to Aqua Data Studio. If you don't have the Architect edition, you can download a 14-day trial of ADS and take a look.
In Chrome it's easy to have it download; use the drop-down button
Good to know
I don't see the slides in the download zip
They are a PDF
Thanks for the download link!
You're welcome
Why am I getting "This part of the webinar cannot be displayed on your device"?
Very strange. I'm sorry about that. I'm not sure why you would get that. I will make sure to email you a replay link.
Any problem?
No problems in that regard.
Hello everyone
Hello
I haven't done much with Python yet, but Python is also available in the Windows Store. One-click install/uninstall can be useful.
True.
Good topic, I'm impressed
I'm learning a lot too.
If I want to use Python in a client application – an exe for Windows (10) – does Python have to be installed on the client machine?
Yes. You can either distribute Python with your application or require them to install it.
When trying to run the demos, I got a lot of class-not-found errors, e.g. for TSynEdit. Am I doing something wrong?
TSynEdit is available in GetIt. You need to install it first.
Does this integration work in an ISAPI DLL running under IIS?
It should.
Hi, I'd like to ask about multithreaded applications. Can I initialize python.dll for each Delphi thread and run code in parallel?
That will be covered in part 2
How can we get the SynEdit components? Is it open source?
It is open source and available via the GetIt package manager in the IDE, or it can be downloaded here
Requiring it as a prerequisite is unfortunately not an option. Which Python distribution can I include in my application installer? How big is it? MB, GB? Thank you
Get the 8 MB one
Do end users have to have Python installed on the target machine?
Either the end user needs to have it pre-installed, or you can distribute the Python DLL with your application.
Where do you get this SynEdit? It's not included in Py4D, is it?
TSynEdit is in the GetIt package manager in the IDE and is available here
Is it possible to run a Python script in a thread?
Yes, but we will cover this in more detail in the next webinar.
Can you explain how the Python components are installed?
I will add detailed installation steps and more information here
Good morning..JS
Thank you!
Will the replay be available for this session?
Yes, you will receive an email with the replay, and I will post the replay for both halves plus additional resources here
It's fantastic!
Agreed
I hope a way becomes available to ship the DLL compiled into the exe as a resource and then either extract it to a temporary folder at runtime or use it as a resource extracted into memory
In theory, you could do that.
In the CAD app Rhinoceros they use a slim version of Python called IronPython for building plugins. Is it possible to mix this slim library with Delphi and build a plugin with Delphi?
Yes
for, in, import – keywords are not highlighted
Something was wrong with the syntax highlighting there. That's exactly what happens with live demos.
Do we have to add a path to Python in the project options?
There are several options for redistribution.
What happens if the syntax is wrong?
There is feedback on errors, and you can handle it in your program
1. Could you add a link to this simple example demo?
Is there a way to manage multiple Python instances from one Delphi application, or is it one Delphi app with just one Python instance?
You can manage this via the TPythonEngine
Can I use it in a web app?
In theory. You have some additional issues with web applications, so you would have to be careful with your threading model, but if you are careful it should work fine.
What a good presenter he is!
Yes
Does this also work with C++Builder?
Most features should work with C++Builder.
hehe, I think Delphi would allow the creation of much better visual interfaces than tkinter
Oh yes, I looked into the Python options for building GUIs and they reminded me of GUI building before Delphi. Delphi is fantastic at adding a GUI to a Python application.
How are Python exceptions handled? Are .pyc files created when the script runs? If not, then the second execution in Python is faster than in Delphi
The component catches the errors and converts them into Delphi exceptions that you can handle.
I need to implement a listener for Firebase. I was able to install Python and the library, but I couldn't get the Python code to run
Have you compared this Python timing with compiled Python code?
Compiled Python would be faster than the demo, but there are other performance improvements via the parallel library. So there are always ways to improve performance.
Is there any limitation on imported Python libraries? For example, can we import opencv, matplotlib, scipy, scikit?
Yes, you can use all of these.
I may have missed information on the "required distribution size" that would need to be included in the end user's Delphi application installation.
about 8 MB
Is it possible to pass variables from Delphi to Python?
Very impressive! If I saw correctly, there are currently some limitations for FreePascal/Lazarus regarding the handling of Variant changes.
Yes
That is exactly my problem, the few options for redistribution. I need to find the minimum size for the end user.
Use the embeddable version; it is very small
How does Python know where the delphi_module is available?
For today's demos that is its name, but in the next webinar we will show how to build modules for use outside of Delphi.
Can I use the Python interop from Delphi 10.3.3?
Yes
Does it also work with Berlin?
Yes
Will this "Questions" feed be available later? There is some good stuff in here.
Yes, I will include them with the replay in the blog post
Can I pass Delphi objects to Python and call object methods in Python?
Yes, a record will be demonstrated shortly, but it can also be done with objects and records.
Great
Agreed
It would be interesting to see how you can build DLLs in Delphi that you call from pure Python, outside of Delphi.
I believe that will be covered in part 2 in two weeks.
??
When is the next session?
In two weeks at the same time. You are already registered
Is it multithreading-capable?
Yes
If I want to distribute the Python DLLs and some libraries together with my application in a subdirectory, how can I tell the system which path these libraries are in?
Yes, via TPythonEngine
I am really impressed by the speaker and the way he can manipulate the screen, zooming in and folding over to the next page. How does he do that, please?
It would be interesting to see the output for the Delphi object. Ref: print(type(Ref)) print(dir(Ref)) print(help(Ref))
They are Python types
Comparing Python with Delphi execution times is very awkward for people who need these TensorFlow, Anaconda, pandas, and Python libraries. Do I really need Delphi?
Delphi makes it easy to build the GUI and then call into TensorFlow, Python libraries, etc.
really nice and easy to use
Really great stuff!
agreed
Will the recording of this session be freely available?
Hello! Does this library (Python4Delphi) allow you to seamlessly link and use Python modules and libraries? NumPy, for example?
Yes. We will cover this in more detail in the next session.
Can you show a Python big data function example (such as an SVM, support vector machine) called from Delphi that returns results to Delphi?
Yes, in the next session.
thanks – that was really interesting
agreed
Great stuff!! Thank you!
agreed
Right decision to split it into two sessions! The first part was very informative, fast, and heavy enough
Yes, we quickly realized that this would be too much for one session. More sessions may also be held in the future.
Thank you, very interesting!
Great! Really looking forward to the next session. Thank you for this great effort
I understand that you can use any IDE? Like PyCharm?
Yes
If you distribute this DLL, you can avoid installing Python on the target machine, right? How big is this Python DLL actually?
Less than 8 MB
A small FMX demo, please.
We will have one in the next session.
Thank you, excellent demo!
Delphi + Python + Docker…. That would be interesting
Sure, easy enough
is it possible to use a Python module?
Yes
Jim and Kiriakos:
Just to clarify for the audience …
"Python4Delphi" is not a cross-compiler from Python to Delphi … Instead, this project is definitely designed for the simultaneous coexistence of Delphi with Python, in both directions …
Right?
Yes, that is correct.
Will there be an example of using the matplotlib lib from Delphi in the second webinar?
Yes
I am registered for part 1. Should I be registered for part 2, or is that automatic for session 2?
Already registered.
Good session! Thank you!
Agreed, you're welcome.
Is there a reference document, please?
There is some documentation here, along with 33 demos and this webinar
Is it possible to select a specific virtual environment created by conda?
Yes
Is it possible to return a STRING from a Delphi function to Python?
Yes
Thanks, very interesting.
Can I access matplotlib? If so, how – in separate windows or embedded in a GUI, e.g. in the VCL?
Join us in 2 weeks
Very good stuff!
agreed
Can we rewatch this webinar later or share it with a colleague?
Yes
Can you pass a Python list to Delphi?
Of course.
Great webinar! It opened up ideas for integrating Python and Delphi into my projects. I am looking forward to the next webinar.
Yes
Can I access database objects like the ClientDataSet from Python?
Yes
The last time I worked in Delphi was 1995. P4D is a good reason to come back to Delphi!
Yes
Thank you!
Hello, is d4p fully cross-platform?
Yes, but no Python on mobile yet.
Can I use Sublime Text?
sure
Awesome!
Thanks for sharing/showing.
Is there a class documentation or reference, please?
Use the source
Great intro. Looking forward to the next sessions. Kudos to Embarcadero for organizing this webinar.
Thank you!
Jim und Kiriakos: Nur um das Publikum zu verdeutlichen … „Python4Delphi“ ist kein Cross-Compiler von Python für Delphi … Stattdessen ist dieses Projekt definitiv für die gleichzeitige Koexistenz von Delphi mit Python in beide Richtungen ausgelegt … Richtig?
richtig
Sehr interessant. (Ich habe PascalScript von RemObjects in meiner Anwendung verwendet).
Gute Sitzung!
Gibt es Schulungen zu Python4Delphi?
noch nicht, aber ich arbeite daran.
funktioniert es unter mobilen Betriebssystemen? Android & IOS?
Python funktioniert nicht auf Mobilgeräten.
Wann ist das zweite Webinar?
zwei Wochen.
Ist geplant, Python4Delphi über GetIt Package Manager zu veröffentlichen, um die Installation zu vereinfachen?
Ja.
Kann ich von Python aus auf Datenbankobjekte wie das clientdataset zugreifen?
Ja
Wie kann Delphi aus Python auf andere Weise als das in DLL kompilierte Delphi-Projekt / -Modul verwendet werden?
Ja, nächste Sitzung in zwei Wochen.
Toll! Wie kann ich Python-Pakete mit Python-DLL verteilen?
Konsultieren Sie die Python-Dokumente.
Wie viele Teilnehmer sind hier, Jim?
Viel.
Arbeitete auf Chrome auf Mac
Gutes Zeug!
Wäre es der gleiche Webinar-Link für Teil 2? Oder muss ich nach einem neuen Link suchen?
Ja
Danke:)
Muss das Management der Referenzzählung manuell erfolgen? Können zukünftige Versionen der Bibliothek dies automatisieren?
Die bevorzugten Optionen führen eine automatische Referenzzählung durch.
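For readers wondering what "reference counting" means here: when you drive the low-level Python API, every object you hold has to be retained and released explicitly, the same bookkeeping CPython exposes through sys.getrefcount. A minimal, purely Python-side illustration (no P4D involved; the high-level wrapper components mentioned in the answer do this for you):

```python
import sys

obj = []                       # a fresh object
before = sys.getrefcount(obj)  # count includes getrefcount's own temporary reference
alias = obj                    # taking another reference bumps the count
after = sys.getrefcount(obj)

print(after - before)          # 1: exactly one extra reference now exists
```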
Do you need a python.dll when running an exe?
Yes
How much will it cost?
Free / open source
Is it possible to transfer bitmaps generated by Python back to Delphi?
I'm thinking of svg->bmp conversions, etc.
In theory
Thanks for the answer!
Applause from one of the attendees. You're both doing a good job!
Will P4D compile in Delphi Community Edition?
Yes
Very cool. Excellent seminar. Thanks for putting it on.
Loved it! Opens up so many possibilities! Thank you!
Is it fully compatible with RAD Server code running on Linux (Ubuntu)?
Yes
Great webinar! Thank you!
Been playing with this on and off for a few years. Can we have a simple example of handing an array to Python, processing it in NumPy, and returning it to Delphi?
yes, will work on that.
Cool! Looking forward to the next session!
Please stay safe and healthy, everyone.
Thank you
Can it run on Android and iOS?
not yet
So much great stuff to cover. You need a part 3; people want more.
How many developers are contributing to this project? This is a must for any "modern era" Delphi developer!!!
great work, thanks for this session, see you at the next one!
Excellent webinar. Very exciting. Looking forward to part 2. Exactly what we were looking for.
Excellent stuff! I definitely intend to use P4D. Thanks and greetings from Israel
Does Python4Delphi support multi-device (FMX)?
Yes: macOS, Linux, and Windows. No Python on mobile yet.
Looking forward to seeing it in the GetIt Package Manager in the near future.
Will work on that.
I use Python on AWS. Can I use Delphi objects there?
If you deploy it there, then yes. Just deploy a Linux module.
Great demo. Looking forward to learning more.
Yes, more time on Python libraries, please!!!
Will do
15 years of Delphi use, 10 years of Python use... thanks for your work!!!
Does reference counting have to be managed manually? Could future versions of the p4d library automate it?
If you use the high-level wrapper components, reference counting is handled automatically.
What do you mean by Python functions being accessed from low-level Delphi code?
Delphi can call the Python functions directly.
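As a sketch of what "calling Python functions directly" amounts to: the embedding host executes a script and then looks up a callable in the resulting namespace, which is conceptually what the P4D engine does behind its script-execution API. A hedged Python-only illustration of that mechanism (the script text and function name are invented for the example):

```python
# The host executes a user script into a namespace...
namespace = {}
exec("def add(a, b):\n    return a + b", namespace)

# ...then retrieves the callable and invokes it directly.
result = namespace["add"](2, 3)
print(result)  # 5
```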
What would you say are the main advantages of using P4D over developing pure Python machine-learning projects?
Use Delphi for the user interface or other integrations
How can we help? Do you prefer pull requests, or discussing proposals first?
However you want to get involved is great!
I've done a lot of things with Delphi on Windows and Linux on AWS
Ah, good!
Do you think you've replaced Tkinter? Please say yes
That's certainly one usage scenario.
Exactly what I was going to suggest!
When I compile demo01, it shows an error that it couldn't open the DLL "python32.dll". I can't find the DLL in the source code. How can I fix this?
You need to install Python first and make sure the bitness of Python matches the bitness of your application (32 vs. 64 bit). You can install both.
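To check which bitness a given Python interpreter actually is, so you can match it to a 32- or 64-bit Delphi executable, the pointer size tells you:

```python
import struct
import platform

bits = struct.calcsize("P") * 8  # size of a pointer, in bits
print(bits, platform.architecture()[0])  # e.g. 64 '64bit' on a 64-bit install
```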
After the success of the Bold community launch, could it be worth organizing a Discord channel? Or is there something similar already?
Certainly something to look at.
Can I handle Delphi errors from Python?
Yes
If you have high-level components, why would you need the low-level components?
The high-level components use RTTI, so the low-level components give you a bit more control and let you remove the RTTI overhead.
Please list the high-level classes and the low-level classes. I'm not sure which are which.
TPyDelphiWrapper is the high-level component.
I have to move on! Thanks, folks! See you later!
Can I debug Python code from Delphi?
You can't debug Python code from the Delphi IDE, but you can use PyScripter to debug the code in your Delphi application.
Can we create a sample module built in Delphi and install it with pip?
I'm trying to compile the package for Delphi 10.4, but the PythonAction unit has a lot of errors because it uses Ansi and Unicode strings incorrectly. Is a fix in progress?
Is it possible to share memory between Delphi and Python?
Very interesting, thanks. Looking forward to the second one
really great information, thank you very much! Until next time
Will automating Python tests be covered next time?
When is Java coming to Delphi?
Thank you very much!
Thanks! Well done!
Thanks, much appreciated!!!
Thank you very much!
Very useful
Thanks
Thank you
Thank you very much!
Thanks to you. Eagerly awaiting part two.
In Chrome it's just a matter of allowing the download; use the drop-down button.
good to know
I don't see the slides in the downloaded zip
They're a PDF
thanks for the download link!
You're welcome
Why do I get the message "Sorry, this part of the webinar cannot be viewed on your device"?
Very strange, sorry about that. I'm not sure why you'd get that. I'll make sure to get you a replay link by email.
Any problems?
No problems on this end.
Hello, everyone
Hello
I haven't done much with Python yet, but Python is also available in the Windows Store. One-click install/uninstall can be handy.
True.
Good topic, I'm impressed
I'm learning a lot too.
If I want to use Python in a client application, an exe for Windows 10, do I need Python installed on the client machine?
Yes. You can distribute Python with your application or ask users to install it.
Getting a lot of class errors, e.g. TSynEdit not found, when trying to run the demos. Am I doing something wrong?
TSynEdit is available in GetIt; you need to install it first.
Will this integration work in an ISAPI DLL running in IIS?
It should.
Hi, I'd like to ask about multithreaded applications: can I initialize python.dll for each Delphi thread and run code in parallel?
That's covered in part 2
How can we get the SynEdit components? Is it open source?
It's open source and available through the GetIt package manager in the IDE, or download it here: SynEdit
Unfortunately, requiring an install is not an option. Which Python distribution can I include in my application's installer? What is its size: MB, GB? Thanks
Get the 8 MB one
Do end users need to have Python installed on the target machine?
The end user needs it preinstalled, or you can distribute the Python DLL with your application.
Where do I get that SynEdit? It's not included with Py4D, is it?
TSynEdit is in the GetIt package manager in the IDE and is available here
Is it possible to run a Python script inside a thread?
Yes, but we'll cover that in more detail in the next webinar.
Could you explain how the Python components are installed?
I'll add detailed installation steps and more details here
Good morning..JS
Thanks!
Will the replay be available for this session, please?
Yes, you'll get an email with the replay, and I'll post the replay for both halves plus additional resources here.
It's fantastic!
agreed
I hope a way becomes available to ship the compiled DLL inside the exe as a resource, then extract it at runtime to a temp folder, or use it as a resource extracted in memory
In theory, you could do that.
In the CAD application Rhinoceros, they use a short version of Python called IronPython for plug-in creation. Is it possible to mix that small library with Delphi and create a plug-in with Delphi?
yes
for, in, import: keywords aren't highlighted
Something was wrong with the syntax highlighting there. That's what happens with live demos.
Do we need to add a path to Python in the project options?
There are a few redistribution options.
What happens if the syntax is wrong?
It will give feedback about the errors, and you can handle them in your program
1. Will you include a link to this simple demo example in the link?
Is there a way to manage different Python instances from the Delphi application, or is it one Delphi application with only one Python instance?
You can manage that from TPythonEngine
Can I use it in a web application?
In theory. You have some additional concerns with web applications, so you'd need to be careful with your threading model, but if you're careful it should work fine.
What a good presenter he is!
yes
Will this also work with C++ Builder?
Most of the functionality should work with C++ Builder.
hehe, I suppose Delphi would allow building much better visual interfaces than tkinter
Ah yes, I went and explored Python's options for building GUIs and they reminded me of pre-Delphi GUI building. Delphi is fantastic for adding a GUI to a Python application.
How are Python exceptions handled? Are .pyc files created when the script runs? If not, is the second run faster in Python than in Delphi?
The component catches the errors and converts them into Delphi exceptions for you to handle.
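What "converts them into Delphi exceptions" looks like from the Python side: the embedding host runs the script and, if it raises, captures the error type and message to surface to the caller. A Python-only sketch of that capture step (in P4D's case the host then re-raises this as a Delphi exception; the helper name here is invented):

```python
import traceback

def run_script(source):
    """Execute a script; return (ok, error_message) instead of raising."""
    try:
        exec(source, {})
        return True, ""
    except Exception as exc:
        # One-line summary, like the message a host would wrap in its own exception type
        return False, "".join(traceback.format_exception_only(type(exc), exc)).strip()

ok, message = run_script("1/0")
print(ok, message)  # False ZeroDivisionError: division by zero
```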
I need to implement a listener for Firebase. I was able to install Python and the library, but I couldn't keep the Python code running
Have you compared that Python timing against compiled Python code?
Compiled Python would be faster than the demo, but there are other performance improvements through the parallel library. So there are always options to improve performance.
Is there a limitation on which Python libraries can be imported? For example, can we import opencv, matplotlib, scipy, scikit?
Yes, you can use all of those.
Maybe I missed the information about the "required distribution size" that could be included in the end users' Delphi application installer.
about 8 MB
Is it possible to pass variables from Delphi to Python?
yes
Do the SynEdit / TPython__ components support Delphi Seattle?
yes.
Very impressive! If I saw it right, there are currently some restrictions in FreePascal/Lazarus around handling Variant changes.
yes
Exactly, that's my problem: the few redistribution options. I need to find the minimum size for the end user.
Use the embeddable version; it's very small
How does Python know where to get the delphi_module?
For today's demos it's declared there, but in the next webinar we'll show how to create modules for use outside Delphi.
Can I use the Python interactions from Delphi 10.3.3?
yes
Does it also work with Berlin?
yes
Will this "Questions" feed be available later? There's some good stuff here.
Yes, I'll include them in the blog post with the replay.
Can I pass a Delphi object to Python and call the object's methods in Python?
yes, demonstrating a Record shortly, but you can do it with Object and Record too.
Amazing
Agreed
It would be interesting to see how you could build DLLs in Delphi that you call from pure Python, outside Delphi.
I believe that will be covered in part 2 in 2 weeks.
??
When is the next session?
in two weeks at the same time. You're already registered
Is it capable of multithreading?
yes
If I want to distribute the Python DLLs and some libraries along with my application in a subdirectory, how do I tell the system which path those libraries are in?
Yes, via TPythonEngine
I'm really impressed with the speaker and the way he can manipulate the screen, zooming and folding to the next page. How does he do that, please?
It would be interesting to see the output for the Delphi object Ref: print(type(Ref)), print(dir(Ref)), print(help(Ref))
they're Python types
Comparing Python with Delphi execution timings seems really awkward for people who desperately need those TensorFlow, Anaconda, pandas, and other Python libraries. Do I really need Delphi?
Delphi makes it easy to build the GUI and then call the TensorFlow and other Python libraries.
really nice and simple to use
Really amazing stuff!
agreed
Will the recording of this session be freely accessible?
Hi! Does this library (Python4Delphi) let you seamlessly bind and use Python modules and libraries? NumPy for example?
yes. We'll cover that in more detail in the next session.
Essa integração funcionará em um Isapi Dll em execução no IIS?
Deveria.
oi, gostaria de perguntar sobre o aplicativo multithread – posso inicializar python.dll para cada thread delphi e executar o código em paralelo?
Isso é abordado na parte 2
Como podemos obter os componentes SynEdit? É código aberto?
É Open Source e está disponível através do gerenciador de pacotes GetIt no IDE ou baixe aqui Synedit
A exigência não é uma opção, infelizmente. Que distribuição Python posso incluir na instalação do meu aplicativo? Qual é o seu tamanho? MB, GB? obrigado
Obtenha os 8 MB
Os usuários finais precisam ter o Python instalado na máquina de destino?
O usuário final precisa dele pré-instalado ou você pode distribuir a DLL do Python com seu aplicativo.
Onde obter esse Synedit? Não está incluído no Py4D, está?
TSynEdit está no gerenciador de pacotes GetIt no IDE e disponível aqui
É possível executar o script python dentro de um segmento?
Sim, mas abordaremos isso com mais detalhes no próximo webinar.
Você poderia explicar como os componentes do python são instalados
Vou adicionar etapas de instalação detalhadas e mais detalhes aqui
Bom dia..JS
Obrigado!
O replay estará disponível para esta sessão, por favor
Sim, você receberá um e-mail com o replay e eu irei postá-lo para as duas metades e recursos adicionais aqui
E fantastico!
acordado
Uma forma que espero estar disponível para enviar a dll compilada dentro do exe como um recurso e depois extraí-la em tempo de execução para alguma pasta temporária ou usá-la como um recurso extraído na memória
Em teoria, você poderia fazer isso.
No aplicativo CAD Rhinoceros, eles usam uma versão curta do phyton chamado iron phyton para a criação do plugin. É possível misturar esta pequena biblioteca com Delphi e criar um plugin com Delphi?
sim
para, em, importação – palavras-chave não são destacadas
Havia algo errado com o realce de sintaxe ali. É o que acontece com as Live Demos.
Precisamos adicionar um caminho para python nas opções do projeto?
Existem algumas opções de redistribuição.
o que acontece se a sintaxe estiver errada?
Ele fornecerá feedback sobre erros e você pode lidar com isso em seu programa
1. você incluirá um link para este exemplo simples de demonstração no link?
Existe uma maneira de gerenciar diferentes instâncias de python do aplicativo delphi ou é um aplicativo delphi com apenas uma instância de python?
Você pode gerenciar isso no TPythonEngine
posso usar no aplicativo da web?
Em teoria. Você tem algumas preocupações adicionais com os aplicativos da web, então você precisa ter cuidado com seu modelo de threading, mas se você for cuidadoso, ele deve funcionar bem.
Que bom apresentador ele é!
sim
Isso funcionará com o C ++ Builder também?
A maior parte da funcionalidade deve funcionar com o C ++ Builder.
hehe, acho que o Delphi permitiria a criação de interfaces visuais muito melhores do que o tkinter
Ah, sim, eu explorei as opções do Python para a criação de GUI e elas me lembraram da criação de GUI pré-Delphi. Delphi é fantástico em adicionar GUI a um aplicativo Python.
como as exceções do python são tratadas? são criados pyc quando o script é executado? Se não, então a segunda execução i python é mais rápida do que em delphi
O componente captura os erros e os converte em exceções Delphi para você manipular.
Preciso implementar um Listener para Firebase, consegui instalar o Python e a biblioteca, mas não consegui deixar o código Python em execução
Você comparou esse tempo de python ao código python compilado?
O Python compilado seria mais rápido do que o demo, mas há outras melhorias de desempenho por meio da biblioteca paralela. Portanto, sempre há opções para melhorar o desempenho.
Preciso implementar um Listener para Firebase, consegui instalar o Python e a biblioteca, mas não consegui deixar o código Python em execução
existe uma limitação de libs python importados? por exemplo, podemos importar opencv, matplotlib, scipy, scikit?
Sim, você pode usar todos eles.
Talvez eu tenha esquecido de informações sobre o “tamanho de distribuição necessário” que poderia ser incluído na instalação do aplicativo Delphi pelos usuários finais.
cerca de 8 MB
É possível passar variáveis de Delphi para Python?
sim
Os componentes SynEdit / TPython__ oferecem suporte a Delphi Seattle?
sim.
Muito impressionante! Se eu vi direito, existem algumas restrições no FreePascal / Lazarus no momento sobre o tratamento de alterações de variantes.
sim
Exatamente esse é o meu problema as poucas opções de redistribuição. Preciso encontrar o tamanho mínimo para o usuário final.
Use a versão incorporável e é muito pequena
Como o Python sabe onde obter o delphi_module?
Para as demonstrações de hoje, ele diz isso, mas no próximo webinar mostraremos como criar módulos para uso fora do Delphi.
Posso usar interações com Python do Delphi 10.3.3?
sim
Funciona também com Berlim?
sim
Este feed de “Perguntas” estará disponível mais tarde? Existem algumas coisas boas aqui.
Sim, irei incluí-los na postagem do blog com o replay
Posso passar objeto delphi para python e chamar métodos de objeto em python?
sim, demonstrando um registro em breve, mas pode fazer com objeto e registro também.
Surpreendente
Acordado
Seria interessante ver como você poderia construir DLLs em Delphi que você chama de Python puro; fora da Delphi.
Acredito que será abordado na 2ª parte em 2 semanas.
??
Quando é a próxima sessão?
em duas semanas ao mesmo tempo. Você já está registrado
É capaz de multithreading?
sim
Se eu quiser distribuir as DLLs do Python e algumas bibliotecas junto com meu aplicativo em algum subdiretório, como posso informar ao sistema em que caminho essas bibliotecas estão localizadas?
Sim, via TPythonEngine
Estou realmente impressionado com o palestrante e a maneira como ele pode manipular a tela, ampliando e dobrando para a próxima página. Como ele faz isso, por favor?
Seria interessante ver a saída para o objeto delphi Ref: print (type (Ref)) print (dir (Ref)) print (help (Ref))
eles são tipos Python
Comparing Python with Delphi execution timings seems really odd to people who desperately need TensorFlow, Anaconda, pandas, and all the other Python libraries. Do I really need Delphi?
Delphi makes it easy to build the GUI and then call the Python libraries, the likes of TensorFlow, etc.
very nice and simple to use
Really amazing stuff!
agreed
Will the recording of this session be freely accessible?
Hello! Does this library (Python4Delphi) let you seamlessly link and use Python modules and libraries? NumPy, for example?
Yes. We'll cover that in more detail in the next session.
Can you show an example of a Python big-data function, such as SVM (Support Vector Machine), being called from a Delphi form and returning results to Delphi?
Yes, in the next session.
thanks – that was really interesting
agreed
Good stuff!! Thanks!
agreed
Right decision to split it into two sessions! The first part was very informative, fast-paced, and meaty enough
Yes, we quickly realized it would be too much for one session. We may end up doing more sessions in the future as well.
Thank you very much, very interesting!
Great! Looking forward to the next session. Thank you all for this great effort
I understand you can use any IDE? Like PyCharm?
yes
By distributing that DLL, you can avoid installing Python on the target machine, right? How big is that Python DLL?
Less than 8 MB
A small FMX demo, please.
We'll have one in the next session.
Thanks, excellent demo!!!!
Delphi + Python + Docker… that would be interesting
Sure, easy enough
is it possible to use a Python module?
yes
Jim and Kiriakos:
Just to clarify for the audience…
"Python4Delphi" is _not_ a Python-to-Delphi cross-compiler… Rather, this project is definitely designed for the _simultaneous coexistence of Delphi with Python_, in either direction…
Right?
yes, that is correct.
Will there be an example of using the matplotlib library via Delphi in the second webinar?
yes
I'm registered for part 1; do I need to register for part 2, or is that automatic for session 2?
Already registered.
Good session! Thanks!
Agreed, you're welcome.
is there a reference document, please?
There is some documentation here, along with 33 demos, and this webinar
Is it possible to select a specific virtual environment created by conda?
yes
Is it possible to return a STRING from a Delphi function to the Python output?
yes
Thanks, very interesting.
Can I access matplotlib? If so, how: in separate windows, or embedded in a GUI, for example inside the VCL?
Join us in 2 weeks
Very good!
agreed
Will we be able to watch this webinar again later, or share it with a colleague?
yes
Can you pass a Python list to Delphi?
sure.
Great webinar! It opened up new ideas for integrating Python and Delphi into my projects. Looking forward to the next webinar.
yes
Can I access database objects such as the ClientDataSet from Python?
yes
The last time I worked in Delphi was 1995. P4D is a good reason to come back to Delphi!
yes
Thanks!
Hi, is P4D fully cross-platform?
yes, but no Python on mobile yet.
Can I use Sublime Text?
sure
Awesome!
Thanks for sharing/showing.
is there any class documentation or reference, please?
use the source
Amazing introduction. Looking forward to the next sessions. Kudos to Embarcadero for putting on this webinar.
Thanks!
Jim and Kiriakos: Just to clarify for the audience… "Python4Delphi" is _not_ a Python-to-Delphi cross-compiler… Rather, this project is definitely designed for the _simultaneous coexistence of Delphi with Python_, in either direction… Right?
correct
Very interesting. (I've used PascalScript from RemObjects in my application.)
Good session!
Is there any training on Python4Delphi available?
not yet, but I'm working on it.
does it work on mobile OSes? Android and iOS?
Python doesn't run on mobile devices.
when is the second webinar?
two weeks.
Is there a plan to publish Python4Delphi through the GetIt Package Manager to simplify installation?
Yes.
Can I access database objects such as the ClientDataSet from Python?
yes
How can Delphi be used from Python, other than a Delphi project/module compiled into a DLL?
yes, next session in two weeks.
Great! How can I distribute Python packages along with the Python DLL?
Refer to the Python documentation.
How many attendees are here, Jim?
A lot.
Worked in Chrome on the Mac
Good stuff!
will it be the same webinar link for part 2? Or do I need to look for a new link?
yes
Thanks :)
does reference-count management have to be manual? Could future versions of the library automate it?
The preferred options do automatic reference counting.
is python.dll needed when running the exe file?
yes
How much will it cost?
free / open source
Is it possible to transfer bitmaps generated by Python back to Delphi?
I'm thinking of svg->bmp conversions, etc.
In theory
Thanks for the answer!
Applause from one of the attendees. You two are doing a good job!
Will P4D compile on Delphi Community Edition?
yes
Very cool. Excellent seminar. Thanks for putting it on.
Loved it! It opens up so many opportunities! Thanks!
Is it fully compatible with RAD Server code running on Linux Ubuntu?
Yes
Great webinar! Thanks!
I've played with this on and off for a few years. Can we have a simple example of handing an array to Python, processing it in NumPy, and returning it to Delphi?
yes, will work on that.
Cool! Looking forward to the next session!
Please stay safe and healthy, everyone.
thanks
can it run on Android and iOS?
not yet
So much good stuff to cover – you need a part 3 – people want more
How many developers are contributing to this project? This is a must for any "modern era" Delphi developer!!!??
great work, thanks for this session, see you at the next one!
Excellent webinar. Very exciting. Looking forward to part 2. Exactly what we were looking for.
Excellent! I definitely intend to use P4D. Thanks and greetings from Israel
is Python4Delphi multi-device (FMX) compatible?
yes: macOS, Linux, and Windows. Still no Python on mobile.
Hoping to see it in the GetIt Package Manager in the near future.
Will work on it.
I use Python on AWS. Can I use Delphi objects there?
if you deploy there, yes. Just deploy a Linux module.
Amazing demo. I'm looking forward to learning more.
Yes, more time on the Python libraries, please!!!
will do
15 years using Delphi, 10 years using Python… Thanks for your work!!!
does reference-count management have to be manual? Could future versions of the P4D library automate it?
When you use the high-level wrapper components, reference counting is handled automatically.
What do you mean by Python functions being accessed in low-level code from Delphi?
Delphi can call Python functions directly.
What would you say are the main advantages of using P4D compared to developing machine-learning projects in pure Python?
Use Delphi for the UI or other integrations
How can we help? Do you like pull requests? Or should we discuss proposals first?
However you want to get involved is great!
I've done a lot with Delphi on Windows and Linux on AWS
Ah, great!
Do you think you've replaced Tkinter? please say yes
That's certainly one usage scenario.
Exactly what I wanted to propose!
When I compile demo01 it shows an error, could not open the DLL "python32.dll". I can't find the DLL in the source code; how do I fix it?
You need to install Python first, and make sure the bitness of Python matches the bitness of your application (32 vs. 64 bit); you can install both.
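When diagnosing this kind of mismatch, a quick sanity check on the Python side is to read the interpreter's own bitness from its pointer size (a generic Python sketch, not P4D-specific):

```python
import struct

# A pointer ("P") is 4 bytes on a 32-bit interpreter, 8 bytes on a 64-bit one
bits = struct.calcsize("P") * 8
print(f"This Python interpreter is {bits}-bit")
```

The number it reports must match the bitness of the Delphi executable loading python.dll.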
After the success of the Bold community launch, might it be worth organizing a Discord channel? Or does something similar already exist?
Certainly something to look into.
Can I handle Delphi errors from Python?
yes
if you have high-level components, why do you need the low-level components?
The high-level components use RTTI, so the low-level components give you a bit more control and let you remove the RTTI overhead.
Please list the high-level classes and the low-level classes; I'm not sure which are which.
TPyDelphiWrapper is the high-level component.
I have to go! Thanks, folks! See you soon!
Can I debug Python code from Delphi?
You can't debug the Python code from the Delphi IDE, but you can use PyScripter to debug the code in your Delphi application.
Can we create a sample module designed in Delphi and install it with pip?
I'm trying to compile the package for Delphi 10.4, but the PythonAction unit has many bugs because of incorrect use of Ansi and Unicode strings… is that a work in progress?
Is it possible to share memory between Delphi and Python?
Very interesting, thanks. Looking forward to the 2nd
really great info, thanks a lot! see you next time
Will Python test automation be covered next time?
when will we get Java in Delphi?
Thanks!
Thanks! Nice work!
Thanks, folks, much appreciated!!!
Thanks!
Very useful
Thx
Thanks
Thank you very much!
Thank you. Eagerly awaiting the second part
in Chrome, just allow it to download; use the drop-down button
good to know
I don't see the slides in the download archive
It's the PDF
thanks for the download link!
You're welcome
why do I get "Sorry, this part of the webinar can't be viewed on your device"?
Very strange. Sorry about that. Not sure why you got that. I'll be sure to email you the replay link.
Any problem?
No problems with it.
Hi everyone
Hello
I haven't worked much with Python yet, but Python is also available in the Windows Store. One-click install/uninstall can be convenient.
True.
Good topic, I'm impressed
I'm learning a lot too.
If I want to use Python in a client application – an exe for Windows (10) – do I then need to install Python on the client machine?
Yes. You can either distribute Python alongside your application or require them to install it.
Getting a lot of class errors, e.g. TSynEdit not found, error messages when trying to run the demos. Am I doing something wrong?
TSynEdit is available in GetIt; you need to install it first.
Will this integration work in an ISAPI DLL running on IIS?
It should.
hi, I'd like to ask about multithreaded applications – can I initialize python.dll per Delphi thread and execute code in parallel?
That's covered in part 2
How can we get the SynEdit components? Is it open source?
It's open source and available through the GetIt Package Manager in the IDE, or download it here: SynEdit
Unfortunately, requiring it isn't an option. Which Python distribution can I include in my application's installation? How big is it? MB, GB? Thanks
About 8 MB
Do end users need to have Python installed on the target machine?
Either the end user needs it installed beforehand, or you can distribute the Python DLL alongside your application.
Where do I get this SynEdit? It's not in Py4D, is it?
TSynEdit is in the GetIt Package Manager in the IDE and is available here
Is it possible to run a Python script inside a thread?
Yes, but we'll cover that in more detail in the next webinar.
Could you explain how the Python components are installed?
I'll add detailed installation instructions and details here
Good morning… JS
Thanks!
Will a replay of this session be available, please?
Yes, you'll receive a replay email, and I'll post it for both halves, plus additional resources, here
This is fantastic!
agreed
One option I hope will be available is shipping the DLL compiled inside the exe as a resource, then extracting it at runtime either to some temporary folder, or using it as a resource extracted in memory
In theory you could do that.
The Rhinoceros CAD application uses a trimmed-down version of Python called IronPython for creating plugins. Is it possible to mix that library with Delphi and create a plugin with Delphi?
yes
for, in, import – the keywords aren't highlighted
Something's off with the syntax highlighting. That's what happens with live demos.
Do we need to add the Python path in the project options?
There are several distribution options.
what happens if the syntax is wrong?
It will provide error feedback, and you can fix it in your program.
1. Will you include a link to this simple demo example in the link?
2. Can I use this in C++Builder too?
Here are all the samples, and most of the functionality should work with C++Builder.
Is threading supported?
A multithreading demo is coming shortly.
Hello
Is there a way to manage different Python instances from a Delphi application, or is it one Delphi application with one Python instance?
You can manage that from TPythonEngine
can I use it in a web application?
In theory. You have some extra concerns with web applications, so you need to be careful with your threading model, but if you're careful it should work fine.
What a good presenter he is!
yes
Will this also work with C++Builder?
Most of the functionality should work with C++Builder.
heh, I think Delphi will allow much better visual interfaces than tkinter
Oh yes, I went and explored Python's options for building a GUI, and they reminded me of GUI building before Delphi. Delphi is great for adding a GUI to a Python application.
how are Python exceptions handled? is a .pyc created when the script executes? If not, is a second execution in Python faster than in Delphi?
The component traps the errors and converts them into Delphi exceptions so you can handle them.
I need to implement a Listener for Firebase; I managed to install Python and the library, but I couldn't keep the Python code running
Have you compared those Python timings with compiled Python code?
Compiled Python will be faster than the demo, but there are other performance improvements with the parallel library too. So there are always options for improving performance.
is there any limit on imported Python libraries? e.g., can we import opencv, matplotlib, scipy, scikit?
Yes, you can use all of those.
I may have missed the information about the "required distribution size" that can be included in end users' Delphi application installs.
about 8 MB
Is it possible to pass variables from Delphi to Python?
yes
Do the SynEdit/TPython__ components support Delphi Seattle?
yes.
Very impressive! If I understood correctly, there are some limitations on FreePascal/Lazarus regarding Variant handling.
yes
Actually, that's exactly my problem – several redistribution options. I need to find the minimal size for the end user.
Use the embeddable version; it's very small
How does Python know where to get delphi_module from?
Today's demos show that, but in the next webinar we'll show how to build modules for use outside Delphi.
Can I use the Python interop from Delphi 10.3.3?
yes
Does it also work with Berlin?
yes
Will this Questions channel be available later? There are some good points in here.
Yes, I'll include them in the blog post with the replay
Can I pass a Delphi object to Python and call the object's methods in Python?
yes, I'll demo a record shortly, but you can use objects and records as well.
Amazing
Agreed
It would be interesting to see how you can build DLLs in Delphi that you call from pure Python, outside of Delphi.
I think that will be covered in part two, in 2 weeks.
When is the next session?
in two weeks, same time. You're already registered
Is multithreading possible?
yes
Gnostice Document Studio Delphi is a multi-format document-processing component suite for Delphi and C++Builder. Gnostice Document Studio includes the following features:
Included in the Promo Pack is the Gnostice Document Studio Embarcadero Edition. The Embarcadero Edition includes the full-featured VCL and FMX Document Viewer and Document Printer components for viewing and printing PDF documents and image files. VCL target platforms include Win32 and Win64. FMX target platforms include Win32, Win64, macOS, iOS, and Android.
Gnostice Document Studio Delphi is written in 100% Object Pascal for both VCL and FireMonkey. It can process and display all supported formats without requiring external software such as Microsoft Word, the Open XML SDK, the Adobe PDF library, or GhostScript.
Update: The blog post originally listed the wrong edition features.
TL;DR – If you're short on time… skip straight to the GitHub link at the end of the article
There are several possible approaches to keyboard handling in mobile applications, but in practice we can consider two main schools of thought:
Apply a user-interface pattern where only the upper part of the screen's usable area is used for data entry, so that the keyboard never appears over input fields or important information. Personally (when possible), this seems to me the most comfortable approach for the end user
Use the screen's entire usable area for data entry, and in that case keep the display of the fields and the keyboard in sync, making the user experience as comfortable as possible
If you go with option #2, here are some possibilities to make your development easier.
The first, which I've already used in some Delphi Academy videos, is a unit/class that, once added to your project, takes control of the visual positioning of the data-entry controls, shifting them according to the keyboard's position. The most up-to-date version of this code can be found on GitHub, as part of a larger open-source project:
And finally, in this post I'm sharing an approach (one that doesn't depend on external units or classes) based on one of the samples that ship with Delphi and RAD Studio. This approach builds its implementation on TVertScrollBox, and is documented below:
To make the technique above simple to apply in a real project, I'm sharing two distinct use cases below on my GitHub.
The first makes use of visual inheritance: all the keyboard-handling code lives in the base class/form, and its descendants naturally take on this behavior automatically. This even lets you apply it to a pre-existing project without a large amount of refactoring.
The second example uses the same technique, but only in the main form, and assumes that all the app's other visual interfaces (forms) will actually be layouts rendered in a container on the main interface.
Here is the code for both, so you can explore and eventually adopt/expand on the concept:
By publishing the source code of Bold for Delphi under an MIT license at https://github.com/Embarcadero/BoldForDelphi, Embarcadero has officially made the Bold for Delphi library an open-source project.
What Is Bold (and Some History)
Bold is a tool in the MDA (Model Driven Architecture) space: it lets you start with a UML model of your application and a set of business rules written in a high-level language, and "execute" the model after building a graphical user interface for it.
Bold includes a sophisticated object-relational mapping layer, the ability to map data to multiple formats, change synchronization, and much more. It ships with a large number of IDE-integrated tools and options for working with external UML modeling software.
The Bold framework and library were originally built by Bold Soft, later acquired by Borland, and sold as an add-on for Delphi in the Delphi 6 and 7 time frame. In the following years, Bold development was discontinued in order to focus on the ECO (Enterprise Core Objects) framework for the .NET platform. ECO was later sold by Borland to CapableObjects.
A number of customers remained active on Bold and updated it to work with the most recent versions of Delphi, but they could not release and share their updates with other developers because of the proprietary license.
As mentioned, the last internal version of the Bold for Delphi source code has now been released on GitHub under an MIT license.
Note that this is not an updated version. The released code should work with Delphi 7 and Delphi 2006, and it will not work with a Unicode version of the product (Delphi 2009 and later).
If you are wondering what this old code is for: the main goal is to encourage customers who are active on the library to share their newer versions, which work with Delphi 10.4 Sydney. This has been a long-standing request from active Bold customers.
Releasing a version that works with Delphi 10.4 Sydney will not happen immediately, as it depends on active Bold users publishing their changes. Releasing the internal code under an open-source license was important so that others can start collaborating on a newer version.
The Community Drive
Embarcadero does not plan to stay directly involved in updating and maintaining the Bold for Delphi source code, beyond helping the Bold community organize and promote its effort.
If you are interested in getting involved and helping, please let me know (by email or comment) and I can put you in touch with the developers who are starting the community effort.
The benefits of the RAD Studio Update Subscription continue to expand. In addition to the many FREE components and tools available to Update Subscription customers, we want to highlight several exciting features for September related to the 10.4/10.4.1 releases.
Custom HighDPI-Ready VCL Windows Styles
In 10.4 we significantly extended the VCL Styles architecture to support High-DPI and 4K monitors. All UI controls on a VCL form now scale automatically to the proper resolution of the monitor the form is displayed on. Each UI element can be selected from a library of multi-scale versions and scaled to any DPI, resulting in crisp UI elements on all monitors.
The following 13 custom VCL styles have been updated to fully support High DPI in your VCL applications:
Calypso
Stellar
Wedgewood Light
Material Oxford Blue
Puerto Rico
Material Patterns Blue
Windows 10 Modern Malibu
Windows 10 Modern Blue Whale
Windows 10 Modern Clear Day
Windows 10 Modern Black Pearl
Flat UI Light
Lucky Point
Zircon
Parallel Debugger
Today's applications do not run on just one thread: they are spread across the main UI thread and multiple parallel threads, all interacting with each other. Yet most IDEs are built around interacting with one thread at a time while debugging, or have debugging controls that are not even aware more than one thread might exist.
Debugging thread interactions can be hard, really hard... and we have the solution: a new parallel-debugging extension for RAD Studio designed to help you understand and control what your multithreaded application is doing. Visualize thread call stacks side by side. See multiple threads indicated inline in the code editor. Control execution per thread. And more! This exciting new extension is coming soon, exclusively for Update Subscription customers using RAD Studio 10.4.1.
TwineCompile: Speed Up C++ Builds
C++ can be slow to compile, in every C++ IDE, and TwineCompile is the answer. This amazing C++Builder extension parallelizes C++ builds, speeding them up by the number of cores in your machine, so your project builds in half the time, a quarter, or even less.
This add-on is one of the best productivity extensions available for C++ development. Depending on the size of your application, it can save an hour or more per day: a huge time savings for your entire development team!
There are three short two-minute videos introducing TwineCompile and how to use it: probably the best six minutes you could spend today on improving things for your C++ team.
You can get TwineCompile for C++Builder right now with an active Update Subscription, including for Professional edition customers!
Updated FMX Linux
We also recently updated the FMXLinux package for Delphi in GetIt. The latest version has full support for 10.4.1 and includes a number of quality improvements.
Why would a Delphi developer want to add Python to their tool belt? It's all about library access and scriptability. The open-source Python4Delphi (P4D) library by Kiriakos Vlahos, author of the popular PyScripter Python IDE, lets you, as a Delphi developer, leverage the entire collection of Python libraries directly from Delphi. It also makes it easy to run Python scripts and to create new Python modules and new Python types directly from your Delphi application. Give your Delphi applications the best of both worlds!
Join Python4Delphi author Kiriakos Vlahos and Embarcadero Developer Advocate Jim McKeeth in this two-part webinar to learn how you can leverage Python in your Delphi applications.
TensorFlow and other libraries use NumPy internally to perform multiple operations on tensors. The array interface is NumPy's best and most important feature.
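As a minimal sketch of what that array interface gives you, here is a short example (the array values are invented for illustration): vectorized arithmetic and broadcasting replace explicit loops.

```python
import numpy as np

# A small 2-D array; NumPy operations apply element-wise.
matrix = np.array([[1.0, 2.0, 3.0],
                   [4.0, 5.0, 6.0]])

row_means = matrix.mean(axis=1)          # one mean per row -> [2.0, 5.0]
centered = matrix - row_means[:, None]   # broadcasting subtracts each row's mean

print(row_means)   # [2. 5.]
print(centered)    # each row now sums to zero
```

The `[:, None]` reshape turns the means into a column so broadcasting pairs each mean with its own row.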
In particular, pandas offers data structures and operations for manipulating numerical tables and time series.
Its name is derived from the term "panel data", an econometric term for data sets that include observations over multiple time periods for the same individuals.
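A small, hedged illustration of those pandas data structures (the sample figures and dates are invented): a date-indexed series with a table-style aggregation.

```python
import pandas as pd

# A tiny time series: monthly readings indexed by date.
dates = pd.date_range("2020-01-01", periods=4, freq="MS")  # month starts
sales = pd.Series([100, 120, 90, 130], index=dates, name="sales")

# Time-series operation: resample the monthly values into quarterly sums.
quarterly = sales.resample("QS").sum()
print(quarterly.iloc[0])  # 310 (Jan + Feb + Mar)
```

`resample` groups the index by calendar quarter, which is the kind of panel/time-series manipulation the name alludes to.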
scikit-learn provides various classification, regression, and clustering algorithms, including support vector machines, random forests, gradient boosting, k-means, and DBSCAN.
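A minimal sketch of the k-means clustering mentioned above, using scikit-learn (the sample points are invented, chosen as two well-separated blobs so the result is unambiguous):

```python
import numpy as np
from sklearn.cluster import KMeans

# Two well-separated groups of 2-D points.
points = np.array([[0.0, 0.0], [0.1, 0.2], [0.2, 0.1],
                   [5.0, 5.0], [5.1, 5.2], [5.2, 5.1]])

# Fit k-means with two clusters; random_state makes the run reproducible.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(points)

# Points in the same blob receive the same cluster label.
print(kmeans.labels_)
```

Swapping `KMeans` for `DBSCAN` or a classifier such as `RandomForestClassifier` follows the same fit/predict pattern, which is the library's main design convention.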
NLTK is a suite of libraries and programs for symbolic and statistical natural language processing (NLP) for English.
It is intended to support research and teaching in NLP or closely related areas, including empirical linguistics, cognitive science, artificial intelligence, information retrieval, and machine learning.
SciPy is used in science, mathematics, and engineering.
It contains modules for optimization, linear algebra, integration, interpolation, special functions, FFT, signal and image processing, ODE solvers, and other tasks common in science and engineering.
Matplotlib & Seaborn for plotting and statistical data visualization
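A small sketch of Matplotlib plotting, using the non-interactive Agg backend so it runs headless (the data points and the output filename are invented for illustration):

```python
import matplotlib
matplotlib.use("Agg")  # render to a file instead of opening a window
import matplotlib.pyplot as plt

xs = [0, 1, 2, 3, 4]
ys = [x * x for x in xs]

fig, ax = plt.subplots()
ax.plot(xs, ys, marker="o")
ax.set_xlabel("x")
ax.set_ylabel("x squared")
fig.savefig("squares.png")  # writes the chart as a PNG file
```

Seaborn builds on the same figure/axes objects, adding higher-level statistical plot types on top of this API.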
Update: Because there has been so much interest, we are making this a two-part webinar: Combining the Strengths of Delphi and Python.
Using Python libraries and objects in Delphi code
Python-based data analysis in Delphi applications
TensorFlow, developed by Google in collaboration with the Brain Team, is used in almost all Google applications for machine learning.
Neural networks can easily be expressed as computational graphs using TensorFlow, as a series of operations on tensors.
Pillow & MoviePy for image and video processing
RAD Studio 10.4.1 is a quality-focused release, and that goes for the IDE too! We addressed many items, including some very commonly requested changes; read on below.
A "quality-focused release" means one where we introduce very few new features and concentrate 95% of our development effort on quality. 10.4.1 saw a lot of work in the IDE and will feel much smoother for you once you install it. But in 10.4.1 we also spent some of that time on new features, and implemented a couple of really commonly requested items.
There are two sections in this blog post: first, a change to an old IDE feature; second, a new feature and key areas we focused on that you may be very happy about!
The Floating Form Designer
Layouts and multiple monitors: aka, "when does the IDE change things?"
Notable quality
The Floating Form Designer
Since 2003, the RAD Studio IDE has been docked: that is, while you can drag tool windows such as the Palette, Object Inspector, Messages, Watches, and so on to float, the overall design of the IDE is one integrated window. Specifically, the editor and the form designer are integrated into the main window.
The "floating form designer" is what you get when you turn this off: it lets the form you are designing be a window among other windows. That is, it is not embedded in the main IDE, but mimics the Delphi 1-through-7 behavior where the designed form can sit above or behind the editor. This behavior has been superseded by the modern docked design for seventeen years, required you to manually enable the old-style feature, and unfortunately did not always behave well. In evaluating the feature, we made the difficult decision to remove it.
What does this mean? Does it mean you can no longer have multiple editor or designer windows, for example? No! Very much not. In fact, you can still have multiple editor windows spread across multiple monitors if you want, each hosting a designed form … and we even tuned a wide range of areas and made UX and behavior tweaks while we were at it!
Here you can see RAD Studio spread across two monitors. You can always right-click a tab and select "New Edit Window", and once you have a second or third edit window, you can drag tabs between them. This should work quite smoothly: in 10.4.1 we resolved a large number of behavior issues around tabs and tab dragging, edit windows, and focus. Here, the IDE is designing two forms at once. The main window is on the right-hand screen. The Object Inspector, which is docked on the right, reflects information for whichever of the two forms was worked on most recently.
Two really notable items we addressed in this area are:
The IDE used not to work quite as you'd want when clicking an item in the Structure pane: the Structure pane would sometimes scroll, and the wrong item would be selected. This is now resolved. If you click, it selects what you clicked. I'm really glad to note this one.
When you have multiple forms being designed at once, the Structure and Object Inspector windows used to reflect the form designer selection in the window they were docked to. Now, they always reflect the form you are editing. That is, whatever you're working on is what they show information for, regardless of what is docked where. The key thing to note here is how much better 10.4.1 is at handling designing forms across multiple screens.
These were "annoyances", things that may seem minor but got in the way while working. We're pleased to note the better behavior in 10.4.1.
Layouts and multiple monitors: aka, "when does the IDE change things?"
While working on layouts and the designer, we also added one frequently requested feature.
Desktop layouts save the position and location of your IDE windows, including which monitor your IDE is on. You can create your own or overwrite an existing one – just click the desktop/moon icon in the title bar and save the desktop (choose a new name or a pre-existing one). The IDE switches between layouts automatically – which one it chooses, and when, can be controlled on the IDE Options > IDE > Saving and Desktop page – but you can always pick one at any time by clicking it in the combo box in the title bar.
While some people use the IDE across multiple monitors, for example designing on one screen and coding on another, it's also common to have the IDE fully full-screen on just one monitor, and have it move to another screen when debugging. That is, you want your main monitor to show the IDE during normal development, and you want it to move to another monitor when debugging, so that your app is on the main monitor. This is possible by moving the IDE to another screen and saving the debug layout. Then, every time you debug, the IDE will move to the second screen. The key is that this happens every time. Sometimes you want some flexibility.
Many people don't want to manually save layouts for specific screens. Instead, they just want to move the IDE and have it stay where they put it. In this scenario, in the past, if you dragged your IDE to the second screen and clicked Run, and had not explicitly saved your debug layout on the second monitor, the IDE would move back to the main screen as it switched layouts. That's probably not what you want.
In 10.4.1, we introduced settings to control how the IDE moves itself, and this lets you tell the IDE "don't move; stay where I put you" or "only move in specific circumstances". The new settings are in the Options dialog, in the IDE > Saving and Desktop section, under "Layouts and multiple monitors". This lets you choose when the IDE is allowed to move screens when changing layouts.
The new setting
The options are:
Allow changing screen on any layout change: this is the old behavior; the IDE will see which screen a layout was saved on, and move there
Only allow switching screen to/from the debug layout: this addresses the scenario above, where you may want the IDE on your second monitor while debugging, but only then. It allows the IDE to move only when debugging starts or stops.
Always keep the IDE on the same screen: the IDE will never change monitors. It will always stay where you put it.
These settings should greatly help you control where the IDE is placed. Remember, you can always save a layout via the desktop/moon icon in the title bar, and choose a layout via the combo box in the title bar. That, combined with these new settings, will let the IDE appear and be located wherever you need it, and let you configure it so it is always located and arranged the way you want, automatically.
Notable quality
There are over 800 quality fixes in 10.4.1, and the What's New document has a huge list. This is just a selection of a few issues you may have run into, which are worth pointing out no longer occur:
Packages can now have an automatic version suffix, instead of manually specifying the correct version suffix with each new release
The Options dialog (environment options) used to always open showing settings for the Win64 target platform; now it opens according to the currently active platform. This is a frequently requested bug report that we're very glad to address.
The Object Inspector also has tweaks around selection when clicking, as well as flickering when drawing.
"Delete Invalid Paths" in the path editors in the Options dialogs could, in the past, delete valid paths. Now, it deletes only invalid paths.
You can scroll the Options dialogs with the mouse wheel
The Projects view has some options available again through a dropdown in its toolbar
We don't usually highlight issues, but these are worth noting because they're ones you likely encountered, and it's worth knowing they're resolved in 10.4.1.
Overall
RAD Studio 10.4.1 is out now. It's a quality release, with a big focus on quality and improvements. As well as many tweaks and fixes in the IDE, there are some new features around layouts and multi-monitor use which have been requested for some time and which we hope you'll really like having, plus similar attention in quality areas that we think will be very popular.
RAD Studio 10.4.1 es una versión centrada en la calidad, ¡y esto se aplica al IDE! Hemos abordado muchos elementos, incluidos algunos cambios solicitados con mucha frecuencia; lea más abajo.
Una “versión centrada en la calidad” significa una en la que presentamos muy pocas funciones nuevas y centramos el 95% de nuestros esfuerzos de desarrollo en la calidad. 10.4.1 ha tenido mucho trabajo en el IDE y será mucho más sencillo para usted una vez que lo instale. Pero, en 10.4.1 también hemos dedicado ese tiempo a nuevas funciones e implementamos un par de elementos muy solicitados.
Hay dos secciones en esta publicación de blog: primero, un cambio a una característica IDE antigua; en segundo lugar, una nueva función y áreas clave en las que puede estar muy feliz de que nos hayamos centrado.
El diseñador de formularios flotantes
Diseños y varios monitores: también conocido como “¿cuándo cambia el IDE las cosas?”
Calidad notable
El diseñador de formularios flotantes
Since 2003, the RAD Studio IDE has been "docked": that is, while you can drag tool windows such as the Palette, Object Inspector, Messages, Watches, and so on to float, the overall IDE layout is a single integrated window. Specifically, the editor and the form designer are embedded in the main window.
The "floating form designer" is what you get when you turn this option off, letting the form you are designing be one window among others; that is, it is not embedded in the main IDE, but mimics the Delphi 1-through-7 behavior where the designed form can sit above or behind the editor. This behavior was superseded by the modern docked layout seventeen years ago, required manually enabling the old-style option, and unfortunately did not always behave well. After evaluating the feature, we made the difficult decision to remove it.
What does this mean? Does it mean you can't have multiple editor or designer windows, for example? No! Far from it. In fact, you can spread multiple editor windows across multiple monitors if you wish, each hosting a designed form... and we've also tweaked a wide range of UX and behavioral areas while doing so!
Here you can see RAD Studio spread across two monitors. You can always right-click a tab and select "New Edit Window", and once you have a second or third edit window you can drag tabs between them. This should work smoothly: we resolved a large number of behavioral issues around tabs and tab dragging, edit windows, and focus in 10.4.1. Here, the IDE is designing two forms at once. The main window is on the right-hand screen. The Object Inspector, which is docked on the right, reflects whichever of the two forms was most recently worked on.
Two really notable items we've addressed in this area are:
The IDE used to misbehave when you clicked an item in the Structure pane: the pane would sometimes scroll and the wrong item would be selected. That is now resolved. If you click, you select what you clicked on. I'm very happy to note this one.
When multiple forms are being designed at once, the Structure and Object Inspector windows used to reflect the form designer selection in the window they were docked to. Now they always reflect the form you are editing. That is, whatever you're working on is what they show information for, regardless of what is docked where. The key takeaway here is how much better 10.4.1 is at handling form design across multiple screens.
These were "paper cuts": things that may seem minor but get in the way while working. We're pleased to note the improved behavior in 10.4.1.
Layouts and multiple monitors: also known as "when does the IDE move things?"
While working on layouts and the designer, we also added a frequently requested feature.
Desktop layouts save the position and placement of your IDE windows, including which monitor your IDE is on. You can create your own or overwrite an existing one: just click the desktop/moon icon in the title bar and save the desktop (choose a new name or a pre-existing one). The IDE switches between layouts automatically; which one it picks can be controlled on the Options > IDE > Saving and Desktop page, but you can always choose one at any time by clicking it in the combo box in the title bar.
Although some people use the IDE across multiple monitors, for example designing on one screen and coding on another, it's also common to run the IDE full-screen on a single monitor and move it to another screen while debugging. That is, you want your main monitor to show the IDE during normal development, and want it to move to another monitor when debugging so your application sits on the main monitor. This is possible by moving the IDE to the other screen and saving the debug layout. Then, every time you debug, the IDE will move to the second screen. The catch is that this happens every time. Sometimes you want some flexibility.
Many people don't want to save layouts manually for specific screens. Instead, they just want to move the IDE and have it stay where they put it. In this scenario, in the past, if you dragged your IDE to the second screen and clicked Run, and you hadn't explicitly saved your debug layout on the second monitor, the IDE would jump back to the main screen when the layout changed. That's probably not what you want.
In 10.4.1 we've introduced settings to control how the IDE moves, letting you tell the IDE "don't move; stay where I put you" or "only move under specific circumstances". The new settings are in the Options dialog, IDE > Saving and Desktop section, under "Layouts and Multiple Monitors". This lets you choose when the IDE may change screens when switching layouts.
The new setting
The options are:
Allow changing screen on any layout change: this is the old behavior; the IDE will look at which screen a layout was saved on and move there
Only allow changing screen to/from the debug layout: this addresses the scenario above, where you may want the IDE on your second monitor while debugging, but only then. It lets the IDE move only when starting or stopping debugging.
Always keep the IDE on the same screen: the IDE will never change monitors. It will always stay where you put it.
These settings should go a long way toward controlling where the IDE is placed. Remember, you can always save a layout via the desktop/moon icon in the title bar and pick a layout via the combo box in the title bar. Doing so, combined with these new settings, will let the IDE look and sit wherever you need it, and let you set it up so it is always positioned and laid out the way you want, automatically.
Notable quality
There are over 800 quality fixes in 10.4.1, and the What's New document has a huge list. This is just a selection of issues you may have run into that are worth noting no longer occur:
Packages can now have an automatic version suffix, instead of manually specifying the correct version suffix with each new release.
The Options dialog (environment options) used to always open showing the Win64 target platform settings; it now opens based on the currently active platform. This was a frequently reported bug, and we're delighted to resolve it.
The Object Inspector also has fixes around selection when clicking, as well as flicker when drawing. "Delete Invalid Paths" in the path editors in the Options dialogs could, in the past, delete valid paths. Now it deletes only invalid ones.
You can scroll the Options dialogs with the mouse wheel
The Projects view has some options available again via a dropdown menu in the toolbar
We don't normally highlight individual issues, but these are worth calling out because you've likely run into them, and it's worth knowing they are resolved in 10.4.1.
Overall
RAD Studio 10.4.1 is available now. It is a quality release, with a strong focus on quality and improvements. Alongside many tweaks and fixes in the IDE, there are some new features around layouts and multiple monitors that have been requested for some time and that we hope you'll really like, as well as similar attention to quality areas that we think will be very popular.
This summer, and this whole year, have been truly strange. Our lives have changed in many ways, and many of these changes are set to continue. The importance of technology, and the need to quickly build reliable solutions, keeps growing. Embarcadero is succeeding thanks to the strength of its products and its remarkable community. We have a lot to do this summer. Let's keep building together!
Quality updates in 10.4
Big releases have many dependencies and, despite rigorous testing, 10.4 has had its share of quality issues. We have shipped several patches that resolve the main problems. Version 10.4.1, due in September, will deliver further quality improvements and small enhancements. Patches are now far more visible thanks to improvements in GetIt.
10.4 upgrade program
Over two years ago we discontinued our upgrade SKU. With 10.4, upgrade requests have risen quickly, and many customers are frustrated that there is no cheaper upgrade path. So we have reinstated our upgrade program, allowing customers to upgrade from earlier versions at a discounted price. The program expires on September 25. Details will go out by email, but please contact your reseller or your Embarcadero account manager to receive your offer quickly.
Summer bonus packs and promotions
To make 10.4 even more attractive, we have worked with many of our technology partners to create a unique, compelling Enterprise pack. The combined value of the packages included in this pack is over $13,000. In addition, we continue to enhance our GetIt packages for ALL update subscription customers, which contain over $1,000 of free software, including IDE components, Styles, FMX Linux, connectivity components, and more. We plan to keep upgrading the Upgrade Packs throughout the summer, which is now more easily managed through GetIt, so keep checking for additional benefits. These will be automatically available for eligible purchases throughout the summer.
RAD Studio is the best platform for creating native Windows applications. We believe the desktop space has been unfairly overlooked over the years and has a lot to offer. The huge trend toward the web has in some ways oversimplified UX experiences due to the limitations of the browser. Mobile-first application design further contributed to this oversimplification. Yet the native desktop continues to provide many advantages when it comes to more complex, high-performance applications. This is obvious in gaming, but new trends in digitization may push desktop performance advantages and new UX needs to the forefront. Working from home has boosted the use of collaborative apps, and the more complex of these apps are indeed native. The explosion of AI will also increase the number of simultaneous signals that need to be presented in a UX, and browser interfaces will continue to be very limited in their ability to handle them.
We see an opportunity for a resurgence in desktop usage. Cross-platform will continue to be key, but perhaps use cases will start with the best-suited medium, and supplemental experiences, such as mobile, will be treated as such. Trying to build an ERP or a trading app while thinking mobile-first could grossly underserve productivity and constrain the imagination regarding what could be possible. You work with IDEs all the time, and while new tools such as Visual Studio Code are advancing, a true high-performance web IDE is still highly impractical. We feel it is appropriate for us to spearhead a Desktop UX Summit that can continue to expand, include many more partners and companies, and drive new thinking and innovation.
We have increased our commitment to open source projects, where it makes sense. I want to highlight a couple that we are sponsoring this summer. We have created a new branch of Dev C++, which is one of the most popular editors for C++ and is built with Delphi. Our MVP Eli M. has been leading this effort as a great example that uses 10.4 to modernize a “legacy” application. This summer we will also open source the code base of Bold, which is a sort of low code solution for RAD Studio. We have a passionate group of Delphi community experts who will take that effort forward. Finally, we are collaborating with the creator of the popular Python editor PyScripter, Kiriakos Vlahos, to bring access to popular Python libraries to RAD Studio. I am especially excited about this effort, as Python is a natural complement to Delphi. One of the key advantages of Python is the plethora of libraries, especially around analytics and data management, and we are making it easier for our customers to use them!
C++Builder
We just posted a C++ road map and discussion of C++Builder directions and features. Normally we simply post a list of features we plan, but this time we discuss our strategy and both what we’ve done recently and our future plans in light of that strategy, which we believe will be helpful to you. We also discuss the results of answers given by you, our customers, to a key survey and what we’re doing in response. There’s a lot here about Windows and quality.
Other Updates
We have a number of other important updates, including:
Waiving Penalties on Late Renewals due to COVID-19: We understand that COVID-19 has created a lot of hardship for individuals and businesses. Our Renewals Team is running a number of programs to make it easier for expired customers to get back on maintenance plans. Please contact our resellers or Embarcadero renewals directly.
Update Blog Platform: Less than two years ago we changed our blogging platform to standardize with other Idera-owned brands. We feel that the uniqueness in our audience and richness of content is not well supported by this platform, and we plan to change again to a more user-friendly one with better multilanguage support and more flexible authoring capabilities.
Discontinue Proprietary Forum: Our approach to forums is outdated. We feel that our community will be better served by open third-party forums, either dedicated ones, such as Delphi PRAXiS, or general ones, such as Stack Overflow. This creates much wider visibility and access to community support. Embarcadero official support will be provided through our Support Portal for Update Subscription customers.
Maintenance of Standalone FireDAC Discontinued: The SKU has been merged with our Enterprise edition. Please connect with our Sales Team to discuss options to upgrade your Professional maintenance and FireDAC maintenance. (Update: This is in reference to the C/S Add-on Pack that was previously available for Professional licenses.)
Our efforts to focus on quality assurance and bug fixes for C++Builder have never been clearer than in 10.4.1. While we appreciate your patience, we do not take it for granted. We have never been more energized to build on C++Builder's solid foundations, and we will carry this momentum into later releases throughout the year.
Some highlights of this release:
The Win64 debugger, based on LLDB, received some important quality and feature improvements. For example, performance is much better for applications with hundreds of threads; exception handling is improved, especially for OS exceptions; it handles memory changes in complex variables (for example, if a pointer's target changes, that is reflected in the IDE); and there are many other fixes across a variety of areas, plus a new formatter (visualizer) for unique_ptr.
The Win64 linker (ilink64) has a number of improvements in its memory handling, which should help customers who run into memory issues, especially with debug builds
Important quality fixes across the toolchain, ranging from Midas to exception handling, RTTI, and overall stability.
Our goal is to return C++Builder to a stable, efficient IDE. Once we are comfortable with that foundation, we will turn our attention to bigger and better things. Over the next year we expect to upgrade code completion and replace the Win64 linker entirely, which will deliver much better productivity in the IDE as well as help you build very large projects. Keep an eye out for more news as 10.4.2 takes shape.
Status of the Visual Assist Integration in RAD Studio
On our roadmap is the integration of Visual Assist into C++Builder. We are focusing on its core features first, such as code completion, find references, navigation, and refactorings, as candidates for the first release. This is under way. Visual Assist's C++ parser already understands our C++ extensions (properties, closures, etc.), and we are researching several approaches to the IDE integration. To learn more about Visual Assist, take a look at https://www.wholetomato.com/features. Give Visual Assist a try, and if there are features you would like us to include for C++Builder, send us a request.
C++ libraries
Our work on increasing C++Builder's compatibility is ongoing, and we are seeing very good results. You may remember from an earlier blog post that we are taking compatible open-source C++ libraries and ensuring they work with C++Builder (several new ones will be in GetIt soon). Not only does this mean that common, useful libraries are more readily available to you; it also means you are more likely to be able to easily bring in any C++ library you want to use.
These efforts have borne fruit: not only do we have several libraries in GetIt, with more coming soon, but the work required to use a library with C++Builder has changed. These days it is usually simple, mostly a matter of adjusting macros (ifdef-s) written for MSVC or GCC so they also recognize Embarcadero, or of packaging the code correctly. The vast majority of RTL and other methods are available, and libraries can be put to good use easily. Often, a library compiles straight away. If there is a library you are interested in, we suggest trying it with C++Builder 10.4.1: there may be small modifications to make, but compatibility in general should be much better.
Desktop UX Summit
Over the last decade, application design has been heavily focused on mobile or web apps, and web design has strongly influenced application design, often to its detriment. A desktop or mobile application is not a website.
This year brings the first Desktop UX Summit: a free online conference on desktop application design, with a wide variety of presenters, many of them not connected to or using Embarcadero technologies. We want to raise awareness of desktop application design among developers in general, not just our own customers. It has great sessions, and it's free! So mark your calendars for September 16 and 17, and visit https://summit.desktopfirst.com to register!
New Free Tool: Dev C++
Additionally, as we are reinvigorated to produce quality tools for C++ development, we would like to introduce you to our latest lightweight, portable, open-source text editor, Embarcadero Dev-C++.
Embarcadero Dev-C++ is a new and improved fork of Bloodshed Dev-C++ and Orwell Dev-C++. It is a full-featured IDE and code editor for the C/C++ programming language. It uses the MinGW port of GCC (GNU Compiler Collection) as its compiler. Embarcadero Dev-C++ can also be used in combination with Cygwin or any other GCC-based compiler. We were able to ship it with a very low memory footprint because it is a native Windows application and does not use Electron. To top it off, all the work to update this fork was done using the latest version of Embarcadero Delphi. To download this and other free tools, go to https://www.embarcadero.com/free-tools/dev-cpp
C++ News from Around the World
Finally, a roundup of recent C++ news and blog posts!
MeetingC++, one of the best C++ conferences, will be online this year. Held on Central European time, early-bird tickets are €49.
The annual LLVM (Clang, LLDB) meeting will also be online this year. Tickets are free, though you can also buy a paid ticket to support the team.
'The problem with C': a really interesting post by cor3ntin on how the two languages diverge and what C compatibility means for C++.
David I wrote a great blog post showing the use of some Boost classes with C++Builder. (A recent version of Boost is in GetIt.) Notably, he shows the circular buffer class. Boost is full of useful tools, and it's great to see some of them highlighted.
Adecc Systemshaus has started a C++ blog. There are some great posts, particularly on using standard C++ streams, such as C++ streams with a TListView.
Incredibuild, a great build system for distributing C++ builds across machines, has a poll on your favorite C++ IDE, and at the time of writing, Visual Studio, C++Builder, and 'Other' are tied at around 30% each.
Important news for holders of current (active) update and support subscriptions. Last week we received several requests from users asking for help reaching support, since they could not get into the portal by the usual methods. Embarcadero is currently in the middle of updating all of its customer-facing resources to make them clearer, more convenient, and faster. The new portals are opening one by one, and the old sites, while still accessible, will soon be shut down and closed.
We are pleased to announce an updated Customer Support Portal for the Embarcadero Development Tools products. This phase of the portal launch includes a simpler, more user-friendly interface that lets you:
Submit new cases
View existing cases
Add attachments
Check case status
Send and receive comments with Technical Support
This new portal is open to customers with current maintenance contracts. Some active customers have already received new login credentials for the portal. If you happened not to receive the new account information by email, there is nothing to worry about. The new portal's login page contains a link to register your account.
Once verified, you will receive an email with your new login account. With our customers' convenience in mind, you can reach the portal the same way you always have, by visiting our support page. Our plans for next quarter are to expand the new Customer Support Portal to the IDERA ER/Studio and DB PowerStudio product families!
And a quick reminder about the new my.embarcadero.com, where you can access all of your registered products, serial numbers, network licenses, and downloads.
September isn't just the start of the school year. Autumn is when all kinds of interesting professional forums and seminars get under way, where leading thinkers and visionaries, researchers and practitioners, architects and IT project managers from companies large and small come together: well rested, full of fresh ideas, and eager to reconnect.
One of the most interesting events of this autumn takes place very soon, on September 16 and 17, 2020. Embarcadero and RAD Studio 10.4 Sydney are sponsors of the conference. It's a great chance for any application developer, beginner or experienced, to learn from the world's best experts in user interface design and development and in cutting-edge approaches to application/user interaction, and to apply that experience in practice in their own projects. Your applications can and should look first-class on any device and take full advantage of the most powerful and widespread platform. Desktop applications aren't just Windows; they also include the Linux variants popular in our country and the macOS that some companies very much demand. You will learn how to increase the usefulness and effectiveness of your applications by deploying them on both mobile and desktop systems, and how to give users the most convenient UI and UX on each.
More than 20 well-known experts and representatives of leading companies from Europe, the Americas, and the post-Soviet region, including Embarcadero's leading technology partners, have already confirmed their participation in this forum. You will also meet many names new to you, each with their own distinctive, unconventional approach, and you'll be able to refresh your ideas about what application interfaces can do.
Participation is absolutely free. The forum runs online, and you can watch the talks that interest you at a time that suits you. And in the live Q&A sessions, you can dig into whatever details interest you.
If you're interested and want to take part, all you need to do is register. After registering, share the news about the upcoming Desktop First UX Summit with your friends on social media using the hashtag #DesktopFirst and the image below!
Build real-time, web-connected applications with WebSockets support. IPWorks WebSockets includes a set of powerful components for integrating WebSocket communication capabilities into web, desktop, and mobile applications. The components are perfect for building internet-connected applications that require real-time data, including chat, multiplayer games, live financial applications, and more!
WebSocket server and proxy
Powerful server and proxy components for integrating interactive web communications.
Eliminate long polling
WebSockets allow the server to push data to connected clients in real time, eliminating the need for unreliable long-polling techniques.
Unified and extensible design
Very easy to use, with a unified, intuitive, and extensible design. Common component interfaces across platforms and technologies.
Fully integrated components
Native software components for every supported development technology, with no dependency on external libraries.
Blazing-fast performance
Built on an optimized asynchronous socket architecture that has been actively refined for more than two decades.
Great technical support
Backed by an experienced team of support professionals. Unlimited free email support, or paid Premium support options.
Product features
Provides a standards-based foundation for persistent bidirectional communication.
Support for WebSocket (ws://) and WebSocket Secure (wss://) connections with strong SSL encryption up to 256-bit and digital certificates.
Authenticate and encrypt/decrypt sent and received data using TLS 1.3, TLS 1.2, 1.1, or 1.0.
Advanced digital certificate capabilities let you create, sign, and manage X.509 digital certificates. Become your own certificate authority.
A complete unified framework with a common, easy-to-learn object model and simplified interfaces that help you get more done.
Components are thread-safe on critical members.
Fast, robust, reliable: the components consume minimal resources.
Native development components for all supported platforms and component technologies.
Thoroughly tested, rugged components that have undergone hundreds of thousands of hours of testing, both within our QA group and in the field at customer installations.
Detailed reference documentation, sample applications, fully indexed help files, and an extensive online knowledge base.
Backed by multi-tier professional support, including free email support and enterprise-level paid options.
Native Delphi VCL components with no external dependencies. It contains the same trusted components as the other editions, delivered as native Delphi VCL components for true Delphi performance.
The WebSocketServer component is used to create a WebSocket server
Like what you see? The IPWorks WebSockets library from /n software, along with hundreds of other components, is included in our Enterprise Component Pack. For a limited time, when you purchase RAD Studio Enterprise or Architect Edition at the special upgrade price, you also get this third-party software bundle, worth over $13,000, at NO EXTRA COST! Upgrade to RAD Studio 10.4.1 today!
RAD Studio has a great third-party ecosystem of vendors providing both components and IDE plugins, software that lives inside the IDE itself. In RAD Studio 10.4.1, we made some changes to our docked window support (a docked window is a pane such as the Structure, Projects, or Palette views). This change can cause problems with IDE plugins that use their own custom docked windows.
One example of such a plugin is Bookmarks, distributed by Embarcadero, but other commonly used third-party plugins are affected as well.
The problem can appear as access violations when using the plugins or the IDE, as a plugin's docked window showing at unexpected times, and possibly other unexpected behavior.
The problem is resolved once a plugin is recompiled for 10.4.1, and we are in contact with our technology partners to ask them to distribute an updated, recompiled version of any IDE plugin they ship for 10.4.1. We will also update our own IDE plugin, Bookmarks, to a 10.4.1-compatible version downloadable through GetIt. It should be available within 24 hours.
We recommend uninstalling your IDE plugins before upgrading to 10.4.1, and then reinstalling the 10.4.1 versions of those plugins.
Our renewed focus on quality assurance and bug fixes for C++Builder has never been clearer than in 10.4.1. While we appreciate your patience, we do not take it for granted. We have never been more energized to build on the solid foundation of C++Builder, and we will continue this push in subsequent releases throughout the year.
Some highlights from this release:
The Win64 debugger, based on LLDB, has had a number of significant quality improvements and new features. For example, it now has greatly improved performance for applications with hundreds of threads; improved exception handling, especially for OS exceptions; it handles memory changes in complex variables (e.g., if the item a pointer points to changes, that is reflected in the IDE); and many other fixes in various areas, as well as a new formatter (visualizer) for unique_ptr.
The Win64 linker (ilink64) has a number of memory-handling improvements, which should help customers who run into out-of-memory issues, especially with debug builds.
Important quality fixes across the entire toolchain, ranging from Midas to exception handling to RTTI to stability.
Our goal is to bring C++Builder back to being a stable and efficient IDE. Once we are comfortable with that foundation, we will turn our attention to bigger and better things. We hope to update code completion and fully replace the Win64 linker over the coming year, which will bring much better in-IDE productivity and help you link large projects. Watch for more news as 10.4.2 comes to light.
Status of Visual Assist integration in RAD Studio
On our roadmap is the integration of Visual Assist into C++Builder. We are focusing first on its key features, such as code completion, finding references, navigation, and refactoring, as candidates for the first release. This is underway. Visual Assist's C++ parser currently understands our C++ extensions (properties, closures, etc.), and we are investigating several approaches to IDE integration. To learn more about Visual Assist, visit https://www.wholetomato.com/features. Give it a try in Visual Studio, and if there are features you would love to have in C++Builder, send us a feature request.
C++ libraries
Our work to improve C++Builder's compatibility continues, and we are seeing very good results. You may remember from an earlier blog post that we take common open-source C++ libraries and ensure they work with C++Builder. (Several new ones will appear on GetIt soon.) This not only means you have more common, useful libraries readily at your disposal; it also means you are more likely to be able to easily pick up any C++ library you want to use.
These efforts have paid off: not only do we have several libraries in GetIt, with more coming soon, but the work required to use a library with C++Builder has changed. These days it is usually straightforward, mostly a matter of handling the macros (ifdef-s) written for MSVC or GCC so that they also recognize Embarcadero, or wrapping the right code. The vast majority of RTL and other methods exist, and libraries work well. Often a library compiles right away. If there is a library you are interested in, we suggest trying it with C++Builder 10.4.1: small changes may be needed, but overall compatibility should be significantly improved.
Desktop UX Summit
Over the past decade, application design has been heavily focused on mobile or web apps, and web design has strongly influenced application design, often to its detriment. A desktop or mobile application is not a website.
This year brings the first Desktop UX Summit, a free online conference on desktop application design, with a wide variety of speakers, many of whom are not affiliated with or using Embarcadero technologies. We want to raise awareness of desktop application design among developers in general, not just our own customers. It has some great sessions, and it's free! So mark your calendars for September 16 and 17, and visit https://summit.desktopfirst.com to register!
New free tool: Dev-C++
In other news, as we are re-energized to produce quality tools for C++ development, we would like to introduce you to our latest small-footprint open-source editor, Embarcadero Dev-C++.
Embarcadero Dev-C++ is a new and improved fork of Bloodshed Dev-C++ and Orwell Dev-C++. It is a full-featured IDE and code editor for the C/C++ programming language. It uses the MinGW port of GCC (GNU Compiler Collection) as its compiler. Embarcadero Dev-C++ can also be used in combination with Cygwin or any other GCC-based compiler. We were able to ship it with a very low memory footprint because it is a native Windows application and does not use Electron. To top it off, all the work on updating this fork was done using the latest version of Embarcadero Delphi. Go to https://www.embarcadero.com/free-tools/dev-cpp to download this and other free tools.
C++ news around the world
Finally, a roundup of recent C++ news and blog posts!
MeetingC++, one of the best C++ conferences, is online this year. It runs in the Central European time zone, and early-bird tickets cost €49.
The annual LLVM (Clang, LLDB) meeting is also online this year. Tickets are free, though you can also buy a paid supporter ticket.
'The problem with C': a really interesting post by cor3ntin on how the two languages diverge and what C compatibility means for C++.
David I. wrote a great blog post demonstrating the use of some Boost classes with C++Builder. (A recent version of Boost is in GetIt.) Notably, it shows the circular buffer class. Boost is full of useful tools, and it's great to see some of them highlighted.
Adecc Systemshaus writes a C++ blog. There are some great posts, especially on using standard C++ streams, such as C++ streams with a TListView.
Incredibuild, a great build system for distributing C++ builds across machines, has a poll on your favorite C++ IDE; at the time of writing, Visual Studio, C++Builder, and 'Other' are tied at roughly 30% each.
Finally, C++20 is complete! Read more about it on Herb Sutter's blog.
Even years after switching from centralized version control to a decentralized one (or to be more precise: from Subversion to Mercurial) I still find myself stuffing different unrelated changes into one commit. Sometimes there are just some small changes to make the code compile in your current environment, sometimes you correct typos in the comments or string resources, sometimes it is just a better formatting of the code. All of these changes have nothing to do with the current bug to fix or the current feature to implement, but they end up being part of the commit – and that is just plain wrong! When it comes to commits, make them as small as possible!
A changeset is no suitcase to be packed with clothes for all types of occasions. It should have been created for one purpose only in the first place. While it is totally OK to have multiple changesets for one bugfix or feature request or refactoring, it is just bad practice to have one changeset targeting more than one of those.
Let’s say you are fixing a bug in the release branch and then want to merge it into the develop branch. You are much less likely to encounter merge conflicts when the changeset only touches the code necessary to fix the bug. Keeping the changeset small can make the difference between an automatic merge and the need to resolve conflicts manually.
If you are working in an environment that lets you link changesets to the corresponding bug or feature in your issue tracker, you can query for those changesets to get an estimate for a similar bug or feature. Some months later it may be quite difficult to separate the unrelated changes from the relevant ones just by analyzing the changeset – even if you are the one doing that analysis.
There are different ways to help you keep your commits clean. The obvious one is simply skipping files that don’t belong to the change in question. If a change was not made intentionally (whether by yourself or automatically by the IDE), either revert it immediately, or keep the change and uncheck the file for the commit. Sometimes the latter may be necessary just to satisfy the IDE.
The next step is to go into the files’ content and select only those changes that are necessary. Mercurial and Git (and probably others, too) make this a simple click on a checkbox.
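On the command line, the same idea can be sketched with Git. The repository, filenames, and messages below are made up for illustration: a bug fix and an unrelated cleanup end up in the working tree together, but each gets its own commit.

```shell
# Demo in a throwaway repository: two unrelated edits, two separate commits.
cd "$(mktemp -d)" && git init -q demo && cd demo
git config user.email "you@example.com" && git config user.name "demo"
printf 'code\n' > fix.c && printf 'code\n' > style.c
git add . && git commit -qm "initial"

printf 'bugfix\n' >> fix.c          # the actual fix
printf 'reformat\n' >> style.c      # unrelated cleanup made along the way

git add fix.c                        # stage only the bug fix
git commit -qm "Fix header parsing"
git show --name-only --format= HEAD  # prints: fix.c

git add style.c                      # the cleanup gets its own commit
git commit -qm "Reformat style.c"
```

For changes inside a single file, `git add -p` (or `hg commit --interactive` in Mercurial) lets you stage individual hunks, which is the command-line equivalent of the checkbox click mentioned above.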
As a nice side effect, you get short commit messages that usually fit on one line, which keeps them fully visible in the history.
I am constantly getting better at this. What about you?
Delphi 10.4.1 es una versión centrada en la calidad, ¡y esto también se aplica a la finalización del código! Además de ayudarlo a imitar el comportamiento clásico de finalización de código, hemos corregido y ajustado muchos elementos.
Cuando se lanzó RAD Studio 10.4, rediseñamos Code Insight. Si bien la información sobre el código antiguo (“clásico”) todavía está disponible como configuración, de forma predeterminada, Delphi ahora usa una tecnología asincrónica sin bloqueo para completar el código y funciones relacionadas. Significa que el IDE no debe detenerse mientras escribe, y puede usar la finalización de código durante la depuración, así como muchos otros beneficios (por ejemplo, resultados de búsqueda de finalización). Puede leer más sobre la tecnología aquí.
En 10.4.1 nos hemos centrado en pulir la información del código. Debido a que 10.4.1 es una versión de calidad, hay muy pocas funciones nuevas, ¡hay algunas configuraciones nuevas! – pero la mayor parte del trabajo consiste en resolver errores y modificar el comportamiento. A continuación, se incluyen algunas cosas que quizás le interese saber: nuevas configuraciones, correcciones de claves y algunas notas especiales para proyectos muy grandes.
Nueva configuración de finalización de código
Partidos de subrayado
La finalización de código nuevo de 10.4 muestra más resultados que la finalización clásica anterior, al enumerar también elementos no solo que comienzan con lo que escribió (el texto “filtro”), sino que contienen lo que escribió. (En 10.4.1, hay una configuración para controlar que incluye estos elementos adicionales: consulte a continuación).
Esto es útil porque le permite explorar y buscar en la lista completa escribiendo. A veces, puede ser difícil ver por qué se incluye un resultado en particular en la lista, por lo que en 10.4.1 la parte correspondiente de un símbolo ahora está subrayada.
Invocar la finalización del código en 10.4.1 con la configuración predeterminada muestra el texto del filtro coincidente subrayado
En esta captura de pantalla, puede ver que se incluyó “ScaleFactor” porque contiene “act”.
Puede desactivar el subrayado en el cuadro de diálogo Opciones; consulte la sección siguiente para obtener información sobre las nuevas configuraciones.
Obtener el comportamiento de finalización de código clásico
De forma predeterminada, la finalización de código nuevo no copia completamente el comportamiento de finalización de código clásico anterior. Muestra más resultados y utiliza un algoritmo diferente para seleccionar automáticamente el mejor elemento de la lista.
En 10.4.1, agregamos cuatro configuraciones, que combinadas le permiten obtener exactamente el mismo comportamiento que la finalización clásica. Los cuatro están en la nueva pestaña Opciones de Insight de la interfaz de usuario> Editor> página Fuente en el cuadro de diálogo Opciones.
Cuatro nuevas configuraciones de finalización de código
“El texto del filtro está subrayado” controla la nueva función para subrayar la coincidencia, como se indicó anteriormente.
Para imitar la finalización clásica del código, puede cambiar la configuración de esta manera:
‘Lista todos los símbolos que comienzan con el filtro primero’: en
‘Seleccionar símbolo coincidente más corto’: desactivado (en su lugar, selecciona el más cercano en alcance)
‘El texto del filtro está subrayado’: desactivado
‘Mostrar símbolos que contienen filtro’: desactivado (aunque recomendamos mantenerlo activado ; agrega más resultados útiles)
Key quality fixes
The 10.4.1 What’s New page lists many fixes, and I recommend reading the list. However, some that deserve special attention are:
Both memory usage and performance are improved. The language server should use less memory and should be faster
Several improvements aimed specifically at very large projects (thanks to our beta testers here)
Packages are much improved: there are several points in the documentation
Error Insight (“red squiggles”) sometimes lagged in updating once an error was resolved, and sometimes the length of the red squiggly line was wrong; both are fixed
Fixed issues where tooltips / Help Insight did not always show complete information
Some changes for very large projects are mentioned above. Here is a quote about 10.4.1 from one of our customers, who has kindly allowed me to share it:
Congratulations to the LSP team!
I have now managed to open and run our flagship application in 10.4.1. And, magic! Code completion finally works in our main unit with IFDEFs. I think the last time code completion worked there was back around the Delphi 5 era…
It takes about 15 seconds to work the first time (it probably feeds a huge number of units to the LSP), but after that it’s a pleasure to use!
Thank you!
This particular application has just under 3 million lines of code. And it’s the first time Code Insight has worked in that location since Delphi 5.
With every release, we continue to improve Delphi and C++Builder. 10.4.1 is notable because it is a quality-focused release. We will keep improving and changing the IDE with each version we ship, and we hope the Code Insight fixes alone, not to mention the more than 800 bugs fixed overall, make 10.4.1 a very worthwhile release to install.
In our previous tutorial, we explained how to create a REST API in Golang. In this tutorial, we will explain how to work with regular expressions in Golang.
A regular expression, or regex, is a sequence of characters that defines a search pattern. It’s a powerful technique used in most programming languages. Instead of writing many lines of code, regex provides a fast, single-line solution for searching, replacing, extracting, pattern matching, and more.
In Golang, there is a built-in package, regexp, for regular expressions. The package uses the RE2 syntax standard, which is also used by other languages such as Python, C, and Perl. The package contains functions to work with regular expressions. The major regex functions are MatchString(), FindString(), FindStringIndex(), FindStringSubmatch(), FindStringSubmatchIndex(), Compile(), MustCompile(), and more.
So here in this tutorial, we will explain these regex functions with examples.
1. MatchString Method
We can use the regexp.MatchString() method from the regexp package to match a substring. In the example code below, we want to test whether a string starts with the character G. We will use a caret (^) to match the beginning of the text in a string, so the pattern we match against the string is “^G”.
When we run the above example code, it will match the substring and return true, so it displays output like the following:
Match: true Error: <nil>
2. Compile or MustCompile Method
We can use the Compile() or MustCompile() method to create a regex object. Compile() returns an error if the regular expression is invalid, while MustCompile() panics when the regular expression is invalid. So when the pattern may be invalid, it is recommended to use Compile() to create the regex object. We can use these as shown below.
3. FindString Method
We can use the FindString() method to get the first match. If there is no match, the return value is an empty string. In the example code below, we will match the text “ple” that appears at the end of the string. If the string matches, it returns the match; otherwise it returns an empty string. In this example, we also use the Compile() method to create a Regexp object. If we don’t want to handle an error, we can use the MustCompile() method instead.
When we run the above code, it displays the following matched string:
Match: ple Error: <nil>
4. FindStringIndex Method
We can use the FindStringIndex() method to get the starting and ending index of the leftmost match of the regular expression. If there is no match, it returns nil.
In the example code below, we will find the index of the text “p” in a string.
When we run the above example code, it returns the following output:
Match: [17 18] Error: <nil>
5. FindStringSubmatch Method
We can use the FindStringSubmatch() method to find the leftmost substring that matches the regex pattern, along with the matches of its capture groups. If there is no match, it returns nil.
The FREE Desktop First UX Summit is fast approaching on September 16 and 17. This is your chance to learn from the industry’s top UI and UX experts and practitioners and take your desktop applications to the next level. Go beyond the Mobile-First approach of simply stretching a small UI to fill a desktop screen. Discover how to get the most out of the most powerful platform and make your users more productive.
Excited to be presenting the keynote at Desktop First UX Summit!
Did I mention the event is 100% online and completely free? With on-demand sessions, you can watch the sessions you want, whenever you want. You will want to attend the live roundtable Q&A discussions to have your questions answered by our panel of experts.
We have an amazing speaker lineup, and we are adding more every day. In case you’re not familiar with the amazing Mark Miller of CodeRush and DevExpress fame: back in the days of the physical BorCon conferences, Mark’s sessions were the ones to be at. It was not uncommon for them to fill up and for people to miss out. With this online summit, that won’t happen!
Learn the essence of a great design as well as important, proven guidelines based on human biology and cognitive science. Join Mark Miller on Desktop First UX Summit on September 16! Book your ticket for FREE: https://t.co/nHxZ5Z46TB #DesktopFirst pic.twitter.com/i94lacIWLy
You may also notice some unfamiliar faces in the speaker lineup. We have reached out to the larger design and user experience community. A great opportunity to enjoy new perspectives and learn something new!
#DekstopFirst Join Jeff Gothelf in this talk as he covers how technology can enable tremendous gains, but when coupled with the uncertainty of human behavior, can often go awry. Save your seat. Book your FREE ticket now: https://t.co/UHOMHdAFSC pic.twitter.com/cFLcfWB9SJ
React is the world’s most popular JavaScript framework, but it’s not cool because it’s popular. It’s popular because it’s cool. Most React introductions jump right into showing you examples of how to use React, and skip the “why”.
That’s cool. If you want to jump in and start playing with React right away, the official documentation has lots of resources to help you get started.
This post is for people who want an answer to the questions, “why React? Why does React work the way it works? What is the purpose of the API designs?”
Why React?
Life is simpler when UI components are unaware of the network, business logic, or app state. Given the same props, always render the same data.
When React was first introduced, it fundamentally changed how JavaScript frameworks worked. While everyone else was pushing MVC, MVVM, etc, React chose to isolate view rendering from the model representation and introduce a completely new architecture to the JavaScript front-end ecosystem: Flux.
Why did the React team do that? Why was it better than the MVC frameworks (and jQuery spaghetti) that came before?
In the year 2013, Facebook had just spent quite a bit of effort integrating the chat feature: A feature that would be live and available across the app experience, integrating on virtually every page of the site. It was a complex app within an already complex app, and uncontrolled mutation of the DOM, along with the parallel and asynchronous nature of multi-user I/O presented difficult challenges for the Facebook team.
For instance, how can you predict what is going to be rendered to the screen when anything can grab the DOM and mutate it at any time for any reason, and how can you prove that what got rendered was correct?
You couldn’t make those guarantees with any of the popular front-end frameworks prior to React. DOM race conditions were one of the most common bugs in early web applications.
“Non-determinism = parallel processing + mutable state” — Martin Odersky
Job #1 of the React team was to fix that problem. They did that with two key innovations:
Unidirectional data binding with the flux architecture.
Component state is immutable. Once set, the state of a component can’t be changed. State changes don’t change existing view state. Instead, they trigger a new view render with a new state.
“The simplest way that we have found, conceptually, to structure and render our views, is to just try to avoid mutation altogether.” — Tom Occhino, JSConfUS 2013
With flux, React tamed the uncontrolled mutation problem. Instead of attaching event listeners to any arbitrary number of arbitrary objects (models) to trigger DOM updates, React introduced a single way to manipulate a component’s state: Dispatch to a store. When the store state changes, the store will ask the component to re-render.
Flux architecture
When I’m asked “why should I care about React”, my answer is simple: Because we want deterministic view renders, and React makes that a lot easier.
Note: It is an anti-pattern to read data from the DOM for the purpose of implementing domain logic. Doing so defeats the purpose of using React. Instead, read data from your store and make those choices prior to render-time.
If deterministic render was React’s only trick, it would still be an amazing innovation. But the React team wasn’t done innovating. They launched with several more killer features, and over the years, they’ve added even more.
JSX
JSX is an extension to JavaScript which allows you to declaratively create custom UI components. JSX has important benefits:
Easy, declarative markup.
Colocated with your component.
Separate by concern, (e.g., UI vs state logic, vs side-effects) not by technology (e.g., HTML, CSS, JavaScript).
Prior to JSX, if you wanted to write declarative UI code, you had to use HTML templates, and there was no good standard for it at the time. Every framework used their own special syntax you had to learn to do things like loop over data, interpolate variables, or do conditional branching.
Today, if you look at other frameworks, you still have to learn special syntax like the *ngFor directive from Angular. Since JSX is a superset of JavaScript, you get all of JavaScript’s existing features included in your JSX markup.
You can iterate over items with Array.prototype.map, use logic operators, branch with ternary expressions, call pure functions, interpolate over template literals, or generally anything else a JavaScript expression can do. In my opinion, this is a huge advantage over competing UI frameworks.
There are a couple rules you may struggle with at first:
The class attribute becomes className in JSX.
For every item in a list of items you want to display, you need a stable, unique identifier to use for the JSX key attribute. The key must not change when items are added or removed. In practice, most list items have unique ids in your data model, and those usually work great as keys.
React didn’t prescribe a single solution for CSS. You can pass a JavaScript style object to the style property, in which case, many common style names are converted to camelCase for the object literal form, but there are other options. I mix and match a couple different solutions, depending on the scope I want for the style I’m applying: global styles for theming and common layouts, and local scoped for this component only.
Here are my favorite options:
CSS files can be loaded in your page header for common global layouts, fonts, etc. They work fine.
CSS modules are locally scoped CSS files that you can import directly in your JavaScript files. You’ll need a properly configured loader. Next.js enables this by default.
styled-jsx lets you declare styles inline in your React components, similar to how <style> tags work in HTML. The scope for those styles is hyper-local, meaning that only sibling tags and their children will be affected by the styles. Next.js also enables styled-jsx by default.
Synthetic Events
React provides a wrapper around the DOM events called synthetic events. They are very cool for several reasons. Synthetic events:
Smooth over cross-platform differences in event handling, making it easier to make your JS code work in every browser.
Are automatically memory managed. If you were going to make an infinitely scrolling list in raw JavaScript + HTML, you would need to delegate events or hook and unhook event listeners as elements scroll on and off the screen in order to avoid memory leaks. Synthetic events are automatically delegated to the root node, meaning React developers get event memory management for free.
Note: Prior to React v17, it’s not possible to access synthetic event properties in asynchronous functions because of event pooling. Instead, grab the data you need from the event object and reference it in your closure environment. Event pooling was removed in v17 because browser optimizations take care of it.
Note: Prior to v17, synthetic events were delegated to the document node. After v17, synthetic events are delegated to the React root node.
Component Lifecycle
The React component lifecycle exists to protect component state. Component state must not be mutated while React is drawing the component. Instead, a component gets into a known state, draws, and then opens up the lifecycle for effects, state updates, and events.
Understanding the lifecycle is key to understanding how to do things the React way, so you won’t fight with React, or accidentally defeat the purpose of using it in the first place by improperly mutating or reading state from the DOM.
Beginning at React 0.14, React introduced class syntax to hook into React’s component lifecycle. React has three lifecycle phases to think about: Mounting, Updating, and Unmounting:
React Lifecycle
And then within the update lifecycle, there are three more phases:
React Update Cycle
Render — aside from calling hooks, your render function should be deterministic and have no side-effects. You should usually think of it as a pure function from props to JSX.
Pre-Commit — Here you can read from the DOM using the getSnapshotBeforeUpdate lifecycle method. Useful if you need to read things like scroll position or the rendered size of an element before the DOM re-renders.
Commit — During the commit phase, React updates the DOM and refs. You can tap into it using componentDidUpdate or the useEffect hook. This is where it’s OK to run effects, schedule updates, use the DOM, etc.
Dan Abramov made a great diagram that spells out all the details as you might see it from the React class perspective:
React Component Lifecycle Diagram by Dan Abramov (Source)
In my opinion, thinking of a component as a long-lived class is not the best mental model for how React works. Remember: React component state is not meant to be mutated. It’s meant to be replaced, and each replacement of the current state triggers a re-render. This enables what is arguably React’s best feature: Making it easy to create deterministic view renders.
A better mental model for that behavior is that every time React renders, it calls a deterministic function that returns JSX. That function should not directly invoke its own side effects, but can queue up effects for React to run.
In other words, you should think of most React components as pure functions from props to JSX.
A pure function:
Given same inputs, always returns the same output (deterministic).
Has no side-effects (e.g., network I/O, logging to console, writing to localStorage, etc.)
Note: If your component needs effects, use useEffect or call an action creator passed through props and handle the effects in middleware.
React Hooks
React 16.8 introduced a new concept: React hooks are functions that allow you to tap into the React component lifecycle without using the class syntax or directly calling lifecycle methods. Instead of declaring a class, you write a render function.
Calling a hook generally introduces side-effects — effects which allow your component to hook into things like component state and I/O. A side-effect is any state change observable outside the function other than the function’s return value.
useEffect lets you queue up effects to run at the appropriate time in the component lifecycle, which can be just after the component mounts (like componentDidMount), during the commit phase (like componentDidUpdate), or just before the component unmounts (like componentWillUnmount).
Notice how three different lifecycle methods fell out of a single React hook? That’s because instead of putting logic in lifecycle methods, hooks allow you to keep related logic together.
Many components need to hook something up when a component mounts, update it every time the component re-draws, and then clean up before the component unmounts to prevent memory leaks. With useEffect, you can do that all in one function call, instead of splitting your logic into 3 different methods, mixed with all the other unrelated logic that also needs to use those methods.
Hooks enable you to:
Write your components as functions instead of classes.
Organize your code better.
Share reusable logic between different components.
Compose hooks to create your own custom hooks (call a hook from inside another hook).
Generally speaking, you should favor function components and React hooks over class-based components. They will usually be less code, better organized, more readable, more reusable, and more testable.
Container vs Presentation Components
For better modularity and reusability of components, I tend to write my components in two parts:
Container components are components that are connected to the data store and may have side-effects.
Presentation components are mostly pure components, which, given the same props and context, always return the same JSX.
Tip: Pure components should not be confused with React.PureComponent, which is so named because it’s unsafe to use it for components that aren’t pure.
Presentation components:
Don’t touch the network
Don’t save or load from localStorage
Don’t generate random data
Don’t read directly from the current system time (e.g., by calling a function like Date.now())
Don’t interact directly with the store
May use local component state for things like form inputs, as long as you can pass in an initial state so that they can be deterministically tested
That last point is why I call presentation components “mostly pure”. Once React takes control of the lifecycle, they’re essentially reading their component state from React global state. So hooks like useState and useReducer provide implicit data input (input sources that are not declared in the function signature) making them technically impure. If you want them to be really pure, you can delegate all state management responsibility to the container component, but IMO, it’s overkill as long as your component is still unit testable.
“Perfect is the enemy of good” — Voltaire
Container Components
Container components are components which handle state management, I/O, and any other effects. They should not render their own markup — instead, they delegate rendering to the presentation component they wrap. Typically, a container component in a React+Redux app would simply invoke mapStateToProps, mapDispatchToProps, and wrap the presentation component with the result. They may also compose in many cross-cutting concerns (see below).
Higher Order Components
A Higher Order Component (HOC) is a component which takes a component and returns a component in order to compose in additional functionality.
Higher Order Components work by wrapping a component around another component. The wrapping component adds some DOM or logic, and may or may not pass additional props into the wrapped component.
Unlike React hooks and render props components, HOCs are composable using standard function composition, so you can declaratively mix in shared behavior across all your app components without those components knowing that those behaviors exist. For example, here is an HOC from EricElliottJS.com:
This mixes in all the common, cross-cutting concerns shared by all the pages on EricElliottJS.com. withEnv pulls in environment settings, withAuth adds GitHub authentication, withLoader displays a spinner while user data is loading, withLayout({ showFooter: true }) displays our default layout with a footer at the bottom of the page, withFeatures loads our feature toggle settings, withRouter loads our router, withCoupon handles magic coupon links, and withMagicLink handles our passwordless user authentication with Magic.
Almost all the pages on our site use all of those features. With this composition done in a higher order component, we can compose it into our container components with one line of code. Here’s what that would look like for our lesson page handler:
import LessonPage from '../features/lesson-pages/lesson-page.js';
import pageHOC from '../hocs/page-hoc.js';
export default pageHOC(LessonPage);
A common but miserable alternative to these kinds of HOCs is the pyramid of doom:
Repeat for every page. If you need to change this anywhere, you have to remember to change it everywhere. It should be self-evident why this sucks.
Leveraging composition for cross-cutting concerns is one of the best ways to reduce code complexity in your applications. The topic of composition is so important, I wrote a whole book on it: “Composing Software”.
Recap
Why React? Deterministic view renders, facilitated by unidirectional data binding and immutable component state.
JSX provides easy, declarative markup in your JavaScript.
Synthetic events smooth over cross-platform events and reduce memory management headaches.
The component lifecycle exists to protect component state. It consists of mounting, updating, and unmounting, and the updating phase consists of render, pre-commit, and commit phases.
React hooks allow you to tap into the component lifecycle without using the class syntax, and also make it easier to share behaviors between components.
Container and Presentation Components allow you to isolate presentation concerns from state and effects, making both your components and business logic more reusable and testable.
Higher Order Components make it easy to share composable behaviors across many pages in your app in a way that your components don’t need to know about them (or be tightly coupled to them).
When you’ve got the foundations down and you’re ready to build real apps with React, Next.js and Vercel can automate the process of setting up your build configuration, CI/CD, and highly optimized, serverless deployment. It’s like having a full time DevOps team, but it actually saves you money instead of costing you full-time salaries.
Eric Elliott is a tech product and platform advisor, author of “Composing Software”, cofounder of EricElliottJS.com and DevAnywhere.io, and dev team mentor. He has contributed to software experiences for Adobe Systems, Zumba Fitness, The Wall Street Journal, ESPN, BBC, and top recording artists including Usher, Frank Ocean, Metallica, and many more.
He enjoys a remote lifestyle with the most beautiful woman in the world.
We are very pleased to announce Delphi, C++Builder and RAD Studio 10.4 Sydney Release 1, also known as RAD Studio 10.4.1.
This new version builds on the 10.4 feature set, improving existing features throughout the product and delivering a more solid and fluid experience to Delphi and C++Builder developers. RAD Studio 10.4.1 has a strong focus on quality improvements. Key quality focus areas include:
IDE
Delphi Code Insight (LSP)
Parallel Library
SOAP and XML
C++ Toolchain
FireMonkey
VCL
Delphi Compiler
iOS Deployment
RAD Studio 10.4.1 includes all fixes from 10.4 Patch 1, Patch 2 and Patch 3.
Delphi 10.4.1, C++Builder 10.4.1 and RAD Studio 10.4.1 are available for download to any customer on an active Update Subscription. 10.4.1 contains more than 800 quality improvements, including more than 500 improvements for issues reported publicly on the Quality Portal site.
Customers with Update Subscription can download and install RAD Studio 10.4.1 today from https://my.embarcadero.com using their existing license.
In our previous tutorial, we explained how to delete files in Golang. In this tutorial, we will explain how to create a REST API in Golang.
REST (Representational State Transfer) is the most widely used interface for serving dynamic data and performing operations such as add, update, and delete in web applications. REST APIs are used in all kinds of applications to make GET, POST, PUT, or DELETE HTTP requests to perform CRUD operations on data.
In this tutorial, we are going to create a simple REST API with Golang to perform GET, POST, DELETE, and PUT operations on dynamic data. We will focus on the basic concepts, so we will perform actions on employee JSON data instead of a database. You can connect it to a database as your application requires.
So let’s proceed with the code to create a REST API in Golang:
1. Create Server to Handle API Requests
We need to create a server to handle HTTP requests to the API. We will create a function apiRequests() in the main.go file and call it within func main(). The function apiRequests() will handle all requests to the root URL.
package main
import (
"fmt"
"log"
"net/http"
)
func homePage(w http.ResponseWriter, r *http.Request){
fmt.Fprintf(w, "Welcome to the API Home page!")
}
func apiRequests() {
http.HandleFunc("/", homePage)
log.Fatal(http.ListenAndServe(":3000", nil))
}
func main() {
apiRequests()
}
When we run the above code, the API starts on port 3000. Load the URL http://localhost:3000/ in a browser and you will see the API home page welcome message, which means we have created the base of our REST API.
2. Create Employee Dummy Data for REST API
In this tutorial we will create a simple REST API to perform the CRUD operations GET, POST, DELETE, and PUT on employee data. We will create some dummy employee data in func main() and perform operations on it.
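The Employee struct and seed data used throughout this tutorial (they also appear in the complete listing in section 6) can be sketched like this:

```go
package main

// Employee is the record our API serves; the struct tags
// control the JSON field names in responses.
type Employee struct {
	Id      string `json:"Id"`
	Name    string `json:"Name"`
	Address string `json:"Address"`
	Salary  string `json:"Salary"`
}

// Employees holds the in-memory dummy data in place of a database.
var Employees []Employee

// seedEmployees returns the dummy records used by the tutorial.
func seedEmployees() []Employee {
	return []Employee{
		{Id: "1", Name: "John Smith", Address: "New Jersey USA", Salary: "20000"},
		{Id: "2", Name: "William", Address: "Wellington New Zealand", Salary: "12000"},
		{Id: "3", Name: "Adam", Address: "London England", Salary: "15000"},
	}
}

func main() {
	Employees = seedEmployees()
}
```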
Now we will start implementing our REST API methods. We will implement function getAllEmployees() to get all employee data.
func getAllEmployees(w http.ResponseWriter, r *http.Request) {
json.NewEncoder(w).Encode(Employees)
}
We will also handle routing and call the function getAllEmployees() on the /employees route to get all employees in JSON format. We will use the gorilla/mux HTTP router in place of the standard library router in this tutorial example.
We will implement employee delete functionality by creating the function deleteEmployee() to delete an employee by id.
func deleteEmployee(w http.ResponseWriter, r *http.Request) {
	vars := mux.Vars(r)
	id := vars["id"]
	for index, employee := range Employees {
		if employee.Id == id {
			Employees = append(Employees[:index], Employees[index+1:]...)
			// Stop iterating: the slice was just modified in place.
			break
		}
	}
}
We will call the function deleteEmployee() on the route /employee/{id} to delete an employee.
When we make an HTTP request with the DELETE method to the URL http://localhost:3000/employee/3, it deletes the employee with that id.
6. Complete REST API Code
Here is the complete running code for this example.
package main
import (
"encoding/json"
"fmt"
"log"
"io/ioutil"
"net/http"
"github.com/gorilla/mux"
)
type Employee struct {
Id string `json:"Id"`
Name string `json:"Name"`
Address string `json:"Address"`
Salary string `json:"Salary"`
}
var Employees []Employee
func homePage(w http.ResponseWriter, r *http.Request) {
fmt.Fprintf(w, "Welcome to the API Home page!")
}
func getAllEmployees(w http.ResponseWriter, r *http.Request) {
json.NewEncoder(w).Encode(Employees)
}
func getEmployee(w http.ResponseWriter, r *http.Request) {
vars := mux.Vars(r)
key := vars["id"]
for _, employee := range Employees {
if employee.Id == key {
json.NewEncoder(w).Encode(employee)
}
}
}
func createEmployee(w http.ResponseWriter, r *http.Request) {
reqBody, _ := ioutil.ReadAll(r.Body)
var employee Employee
json.Unmarshal(reqBody, &employee)
Employees = append(Employees, employee)
json.NewEncoder(w).Encode(employee)
}
func deleteEmployee(w http.ResponseWriter, r *http.Request) {
	vars := mux.Vars(r)
	id := vars["id"]
	for index, employee := range Employees {
		if employee.Id == id {
			Employees = append(Employees[:index], Employees[index+1:]...)
			// Stop iterating: the slice was just modified in place.
			break
		}
	}
}
func apiRequests() {
route := mux.NewRouter().StrictSlash(true)
route.HandleFunc("/", homePage)
route.HandleFunc("/employees", getAllEmployees)
route.HandleFunc("/employee", createEmployee).Methods("POST")
route.HandleFunc("/employee/{id}", deleteEmployee).Methods("DELETE")
route.HandleFunc("/employee/{id}", getEmployee)
log.Fatal(http.ListenAndServe(":3000", route))
}
func main() {
Employees = []Employee{
Employee{Id: "1", Name: "John Smith", Address: "New Jersey USA", Salary: "20000"},
Employee{Id: "2", Name: "William", Address: "Wellington New Zealand", Salary: "12000"},
Employee{Id: "3", Name: "Adam", Address: "London England", Salary: "15000"},
}
apiRequests()
}
Conclusion
In this tutorial, we implemented the HTTP methods GET, POST, and DELETE to read, add, and delete employee data. We have not implemented the PUT method to update employee data; you can try that yourself and share any suggestions you have.
Geolocation is the identification of an object’s geographic location. It includes information such as latitude, longitude, address, country code, country, zip code, and more.
Geolocation data helps businesses target customers with relevant information, so businesses are always looking for their customers’ geolocation data. Getting accurate geolocation data is not an easy task, but the IPWHOIS.IO Geolocation API provides an accurate geolocation solution.
IPWHOIS.IO provides fast and accurate geolocation data from across the world. The API is very easy to integrate and delivers results in JSON, XML, or Newline format. It is totally free for up to 10,000 requests per month. The IPWHOIS.IO API is used by thousands of developers all around the world.
Special features:
Free to use for up to 10,000 requests per month.
Provides fast and accurate geolocation data.
Provides real-time geolocation data.
Simple, fast, and powerful.
Multilingual responses.
Provides secure data.
Reasonable pricing for small to large websites.
Documentation with code samples in PHP, JavaScript, jQuery, Python, and more.
The API integration is very easy and quick; there is no need to sign up to get access keys. We just need to make an HTTP request to the API with an IP address to get the geolocation data.
The URL format for the IPWHOIS.IO IP Geolocation API is simple:
http://ipwhois.app/json/{IP}
For example, we can make an API request with the following IP address: 47.9.123.84.
We can easily integrate the API with any programming language. Since IPWHOIS.IO does not require any registration or signup, we just need to make an HTTP request to the API to get the geolocation data.
In the example code below, we make an HTTP request to http://ipwhois.app/json/{IP} using PHP cURL, passing the IP address 47.9.123.84 to get the data in JSON format.
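The PHP cURL example referenced above is not included in this copy. As a rough equivalent in Go (the language used in the other tutorials here), a sketch might look like this; the field names printed at the end ("country", "latitude", "longitude") are assumed from the service’s documented JSON response:

```go
package main

import (
	"encoding/json"
	"fmt"
	"io/ioutil"
	"net/http"
)

// geoURL builds the request URL for the free JSON endpoint.
func geoURL(ip string) string {
	return "http://ipwhois.app/json/" + ip
}

func main() {
	resp, err := http.Get(geoURL("47.9.123.84"))
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()

	body, err := ioutil.ReadAll(resp.Body)
	if err != nil {
		fmt.Println("read failed:", err)
		return
	}

	// Decode into a generic map since we only inspect a few fields.
	var data map[string]interface{}
	if err := json.Unmarshal(body, &data); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	fmt.Println(data["country"], data["latitude"], data["longitude"])
}
```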
In this post, we explained how to integrate the IPWHOIS.IO API to get accurate geolocation data. You can check out the documentation to integrate it with advanced options.
Desktop applications have long been undervalued. All the attention went to the web and mobile. While both Microsoft and Apple took great strides to evolve the desktop, there has been far less energy and economic momentum behind that platform. Today, the maturity of the web and mobile devices, along with new use cases in collaboration and artificial intelligence, is driving a rediscovery of the desktop. After all, desktops still offer enormous processing and speed advantages that are only increasing.
Looking back, there were good reasons for web applications to dominate the technology world in the late '90s and early 2000s. They were much easier to deploy and manage through browsers that are practically ubiquitous, creating a huge opportunity to deliver applications to many people at very low cost or for free. Timely or immediate updates with little effort are features that are still hard to beat. However, it has also become clear that certain kinds of desktop applications simply won't be matched on the web, at least not in the near future.
Mobile apps exploded in popularity for the simple reason that many millions of mini-desktops ended up in practically everyone's hands, unlocking all kinds of use cases and economic possibilities. Interestingly, many mobile use cases still favor native mobile apps; the native APIs for the local operating system are, of course, quite different from those on desktops. While mobile-first design and development dominated UX discussions for some time, the heterogeneity of use cases across different form factors increasingly demands an application best suited to a particular form factor and use.
Desktops remain relevant because of their unmatched performance and the fact that screen size matters. The desktop operating system is still very robust and differentiated, especially compared to the web. The easiest place to see the differentiation is gaming. While web and mobile games have evolved dramatically, when it comes to REAL games, you need a desktop (or a dedicated gaming console). And when we get to virtual-world games with realistic graphics, other platforms aren't even close.
Sophisticated developers have long known that desktop IDEs have far superior capabilities. Text-editor-style IDEs remain very popular, but partly because web development has not required the kind of sophistication or productivity that desktop applications demand. As one of my favorite Embarcadero MVPs puts it: "Programming simplified on the Web." Microsoft has done a good job with Visual Studio Code, but still, compared to RAD Studio and Visual Studio, it is relatively basic. Web UX has too many limitations. A high-productivity developer typically has several screens and relies on many "sensors" and "devices" to achieve that productivity. Below is an example of RAD Studio 10.4 with several productivity plugins, including navigation, bookmarks, and multi-threaded debugging (all available free to Update Subscription customers). That is not easy or practical to achieve with a web IDE.
RAD Studio 10.4 with several productivity plugins, including navigation, bookmarks, and multi-threaded debugging (all available free to Update Subscription customers)
We have many examples of such applications from manufacturing, financial services, and healthcare customers that demonstrate desktop performance surpassing web applications. Of course, these desktop applications look nothing like the simple traditional client-server architectures of decades past, and many have sibling web or mobile clients.
Speaking of UX complexity, two other trends will drive further interest in desktops and varied use cases. One is collaboration applications. Remote work is becoming the standard, and collaboration apps like Zoom are a must. Collaboration apps are not simple. Web and mobile clients can cover the basic use cases, but the desktop apps are far more robust. As an example, you can compare the number of Zoom features per platform (I gathered this from their website). These are not ranked by importance, but the numbers are revealing.
Desktop (Win & Mac): 94
Linux: 87
Mobile (iOS & Android): 76
Web: 37
Zoom Features by Platform
In many ways, collaboration apps are in their infancy, as use cases have centered on simple communications rather than true collaboration. This brings me to the other major technology trend, which is AI and robotic automation. We can only imagine the number of "sensors and gauges" that will be at our disposal to help us be more productive. Games can probably give us a hint of the kind of interaction that may become possible in the workplace.
Of course, the web will continue to evolve. As broadband speeds increase dramatically with 5G, many things may change in application architectures, but if games, entertainment, or medical applications offer a window into the future, native desktop applications will remain important and may become even more so.
At Embarcadero, we and our many partners are fascinated by the opportunity for continued thought leadership in the dynamic application development space. RAD Studio is the foundation of many iconic desktop applications and one of the most robust IDEs, especially for Windows. Of course, today we do much more than desktop, but we feel a particular responsibility for that platform. In that spirit, we are organizing a Desktop First UX Summit in September to provide a forum for these discussions, and we invite you to participate.
You can take the practice course where you can review all the expressions above by listening and repeating along with the video lessons. Take the course now: 30 Essential Irregular Korean Verbs for Beginners
to help 돕다
돕다 to help
도와요. I help.
도와 주세요. Help me, please.
More examples
제가 도와 드릴게요. I will help you.
불우 이웃을 도와 주세요. Please help the neighbors in need.
친구는 서로 도와야 해요. Friends should be helping each other.
to be difficult 어렵다
어렵다 to be difficult
어려워요. It’s difficult.
이 문제 너무 어려워요. This question is so difficult.
More examples
어려운 문제 difficult question
이거 안 어려워요? Isn’t it difficult?
너무 어려워서 포기했어요. I gave up because it was too difficult.
to be easy 쉽다
쉽다 to be easy
쉬워요. It’s easy.
이거 너무 쉬워요? Is it too easy?
More examples
쉬운 시험 easy exam
이거 정말 쉬워요. This is really easy.
쉬울 줄 알았어요. I thought this would be easy.
to be cold 춥다
춥다 to be cold
추워요. It’s cold.
오늘 날씨 정말 추워요. It’s so cold today.
More examples
추운 날씨 cold weather
여기는 안 추워요. It’s not cold here.
이렇게 추운 날 어디 가요? Where are you going on this cold day?
to be hot 덥다
덥다 to be hot
더워요. It’s hot.
밖에 많이 더워요? Is it very hot outside?
More examples
더운 여름 hot summer
한국 여름은 너무 더워요. Korean summers are too hot.
방이 더우니까 에어컨 켜 주세요. Please turn on the air conditioner because it’s hot in the room.
to lie down 눕다
눕다 to lie down
누워요. I lie down.
이 침대에 누우세요. Lie down on this bed.
More examples
누웠어요? Did you lie down?
아기가 자려고 막 누웠어요. The baby just lay down to sleep.
여기 누우세요. Please lie down here.
to bake, to roast 굽다
굽다 to bake, to roast
구워요. I bake.
같이 빵 구울래요? Do you want to bake the bread together?
More examples
다 구웠어요. I’ve finished baking.
빵을 구웠는데 맛이 없어요. I baked the bread but it’s not good.
이것 좀 구워 줄래요? Could you please bake this?
to be spicy 맵다
맵다 to be spicy
매워요. It’s spicy.
매워서 못 먹겠어요. I can’t eat it because it’s too spicy.
More examples
매운 음식 spicy food
매운 거 잘 먹어요? Do you eat spicy food well?
이거 너무 매워요. This is too spicy.
to be beautiful 아름답다
아름답다 to be beautiful
아름다워요. It’s beautiful.
와, 꽃이 정말 아름다워요. Wow, the flowers are so beautiful.
More examples
아름다운 풍경 beautiful scenery
이 산은 가을 정말 아름다워요. This mountain is really beautiful in the fall.
오늘따라 더 아름다워 보이네요. You look extra beautiful today.
to be cute 귀엽다
귀엽다 to be cute
귀여워요. It’s cute.
아기가 정말 귀여워요. The baby is so cute.
More examples
귀여운 강아지 cute dog
너무 작고 귀여워요. It’s so small and cute.
너무 귀여워서 갖고 싶어요. It’s so cute that I want to keep it.
to choose 고르다
고르다 to choose
골라요. I choose.
하나만 골라요. Just pick one.
More examples
이거 골라도 돼요? Can I pick this one?
뭐 골랐어요? What did you choose?
골랐어요? Did you choose?
to be different 다르다
다르다 to be different
달라요. It’s different.
그 두 사람은 정말 달라요. Those two people are really different.
More examples
제 거랑 달라요. It’s different from mine.
상황에 따라 달라요. It is different depending on the situation.
그들과 우리는 달라요. They are different from us.
to not know 모르다
모르다 to not know
몰라요. I don’t know.
누가 했는지 저는 몰라요. I don’t know who did it.
More examples
저는 정말 몰라요. I really don’t know.
이 문제는 몰라서 못 풀었어요. I couldn’t solve this problem because I didn’t know.
몰라도 괜찮아요. It’s okay not to know.
to be fast 빠르다
빠르다 to be fast
빨라요. It’s fast.
비행기가 가장 빨라요. Airplanes are the fastest.
More examples
버스가 가장 빨라요. The bus is the fastest.
누가 더 빨라요? Who’s faster?
거기는 여기보다 2시간 더 빨라요. They’re 2 hours ahead of us.
to cut 자르다
자르다 to cut
잘라요. I cut.
이거 좀 잘라 주세요. Please cut this.
More examples
이렇게 잘랐어요. I cut it like this.
이거랑 똑같이 잘라 주세요. Please cut it exactly like this.
왜 이렇게 잘랐어요? Why did you cut it like this?
to raise, to grow 기르다
기르다 to raise, to grow
길러요. I grow it.
저는 허브를 길러요. I grow herbs.
More examples
저는 집에서 강아지를 길러요. I raise a dog in my house.
그는 턱수염을 길러요. He has a beard.
애완동물을 길러 본 적 있어요? Have you ever raised a pet?
to be lazy 게으르다
게으르다 to be lazy
게을러요. I’m lazy.
제 남동생은 너무 게을러요. My brother is very lazy.
More examples
겨울이 되면 더 게을러 져요. When it’s winter, you become lazier.
그는 너무 게을러서 하루종일 잠만 잤어요. He is so lazy that he slept the whole day.
그 사람은 게을러서 해고 당했어요. He got fired because he was lazy.
to press 누르다
누르다 to press
눌러요. I press it.
지금 눌러요? Do I press now?
More examples
이 버튼 눌러도 돼요? Can I press this button?
구독 버튼 눌러 주세요. Please hit the subscribe button.
다시 들으시려면 1번을 눌러 주세요. If you want to listen again, press 1.
to listen, to hear 듣다
듣다 to listen, to hear
들어요. I hear.
제 말 좀 들어 보세요. Please listen to me.
More examples
그 소식 들었어요? Have you heard about the news?
잘 들어 보세요. Listen carefully.
들을 준비 됐어요? Are you ready to listen?
to walk 걷다
걷다 to walk
걸어요. I walk.
저는 학교에 걸어가요. I walk to my school.
More examples
걸을까요? Shall we walk?
걸어 왔어요? Did you walk here?
여기서 걸어서 10분 정도 걸려요. It takes about 10 minutes by walking.
to ask 묻다
묻다 to ask
물어요. I ask.
무엇이든지 물어보세요. Please ask me anything.
More examples
그런 거 물어보지 마세요. Don’t ask something like that.
물어봤어요? Have you asked?
저기에서 물어보세요. Ask there.
to load 싣다
싣다 to load
실어요. I load.
짐 다 실었어요? Have you finished loading the luggage?
More examples
차에 실은 짐 the luggage that is loaded in the car
배에 화물을 실었어요. I loaded cargo on a ship.
이것 좀 트럭에 실어 주세요. Please load this into the truck.
to recover, to get well 낫다
낫다 to recover, to get well
나아요. It gets well.
푹 쉬고 얼른 나아요. Get some rest and get well.
More examples
다 나았어요? Did you fully recover?
드디어 제 팔이 다 나았어요. My arm is finally okay now.
시간이 지나면 다 나을 거예요. Time will help you get well.
to join, to connect 잇다
잇다 to join, to connect
이어요. I connect.
이 두 점을 이어요. Connect the two dots.
More examples
파이프를 이었어요. I connected pipes together.
이 두 점을 이을 수 있을까요? Can we connect these two dots?
다시 한 번 더 이어 보세요. Please try to connect it once again.
to build 짓다
짓다 to build
지어요. I build.
이 집은 제가 직접 지었어요. I built this house myself.
More examples
다 지었어요? Did you finish building?
내년까지 다 지어야 해요. You should build it by next year.
그 건물은 나무로 지어 졌어요. The building was built of wood.
to pour 붓다
붓다 to pour
부어요. I pour.
이제 물을 한 컵 부어요. Now, pour a cup of water.
More examples
이걸 다 부었어요? Did you pour it all?
이것만 부으면 끝나요. It will be done once you pour this.
샐러드 위에 드레싱 부어 주세요. Please pour the dressing over the salad.
to draw (a line) 긋다
긋다 to draw (a line)
그어요. I draw (a line).
여기에 선을 그어 보세요. Draw a line here.
More examples
누가 여기에 선을 그었어요? Who drew this line here?
가로로 선을 그어 주세요. Draw a horizontal line.
빨간색으로 밑줄을 그으세요. Underline it in red.
to be blue 파랗다
파랗다 to be blue
파래요. It’s blue.
하늘이 정말 파래요. The sky is really blue.
More examples
파란 하늘 blue sky
추워서 입술이 파래요. My lips are blue because it’s cold.
겨울 바다는 정말 파래요. The sea in the winter is really blue.
to be black 까맣다
까맣다 to be black
까매요. It’s black.
옷 색깔이 다 까매요. The clothes are all black.
More examples
까만 콩 black bean
방이 온통 까매요. The room is totally black.
다크 초콜렛은 정말 까매요. Dark chocolate is really black.
to be like this 이렇다
이렇다 to be like this
이래요. It’s like this.
항상 이래요. It’s always like this.
More examples
이 시계 왜 이래요? What’s wrong with this watch?
어떻게 이래요? How can this be like this?
우리만 이래요? Is it only us like this?
In BDD you discover what software you should build through a collaborative process involving both software developers and business people. BDD also involves a lot of test automation and tools like Cucumber and SpecFlow. But what would happen if you used an Approval testing tool instead? Would that still be BDD?
I’m a big fan of Behaviour Driven Development. I think it’s an excellent way for teams to gain a good understanding of what the end-user wants and how they will use the software. I like the emphasis on whole team collaboration and building shared understanding through examples. These examples can be turned into executable scenarios, also known as acceptance tests. They then become ‘living documentation’ that stays in sync with the system and helps everyone to collaborate over the lifetime of the software.
I wrote an article about Double-Loop TDD a while back, and I was thinking about BDD again recently in the context of Approval testing. Are they compatible? The usual tools for automating scenarios as tests are SpecFlow and Cucumber which both use the Gherkin syntax. Test cases comprise ‘Given-When-Then’ steps written in natural language and backed up by automation code. My question is – could you use an Approval testing tool instead?
I recently read a couple of books by Nagy and Rose. They are about BDD and specifically how to discover good examples and then formulate them into test cases. I thought the books did a good job of clearly explaining these aspects in a way that made them accessible to everyone, not just programmers.
Nagy and Rose are planning a third book in the series which will be more technical and go into more detail on how to implement the automation. They say that you can use other test frameworks, but in their books they deal exclusively with the Gherkin format and Cucumber family of tools. What would happen if you used an Approval testing tool? Would it still be BDD or would we be doing something else? Let’s go into a little more detail about the key aspects of BDD: discovery, formulation, and automation.
Discovery
The discovery part of BDD is all about developers talking with business stakeholders about what software to build. Through a structured conversation you identify rules, examples, and unanswered questions. You can use an 'example mapping' workshop for that discussion, as outlined in this blog post by Cucumber co-founder Matt Wynne.
Formulation
The formulation part of BDD is about turning those rules and examples of system behaviour into descriptive scenarios. Each scenario is made as intelligible as possible for business people, consistent with the other scenarios, and unambiguous about system behaviour. There’s a lot of skill involved in doing this!
Automation
The automation part of BDD is where you turn formulated scenarios into executable test cases. Even though the automation is done in a programming language, the focus is still on collaboration with the business stakeholders. Everyone is expected to be able to read and understand these executable scenarios even if they can’t read a programming language.
Double-Loop TDD
The picture shown at the start of the article from Nagy and Rose’s Discovery BDD book emphasizes the double loop nature of the BDD automation cycle. The outer loop is about building the supporting code needed to make a formulated scenario executable. Test-Driven Development fits within it as the inner loop for implementing the system that fulfills the scenarios. In my experience the inner loop of unit tests goes round within minutes, whereas the outer loop can take hours or even days.
Later in the book they have a more detailed diagram showing an example BDD process:
This diagram is more complex, so I’m not going to explain it in depth here (for a deep dive take a look at this blog post by Seb Rose, or of course read the book itself!). What I want to point out is that the ‘Develop’ and ‘Implement’ parts of this diagram are showing double-loop TDD again, with slightly more detail than before. For the purpose of comparing a BDD process, with and without Approval testing, I’ve redrawn the diagram to emphasize those parts:
How you formulate, automate, and implement with TDD will all be affected by an approval testing approach. I recently wrote an article ”How to develop new features with Approval Testing, Illustrated with the Lift Kata”. That article goes through a couple of scenarios, how I formulate them as sketches, then automate them with an approval testing tool. Based on the process described in that article I could draw it like this:
What’s different?
“Formulate” is called “Sketch” since the method of formulation is visual rather than ‘Given-When-Then’. The purpose is the same though.
“Automate” includes writing a Printer as well as the usual kind of ‘glue’ code to access functionality in your application. A Printer can print the state of the software system in a format that matches the Sketch. The printer code will also evolve as you work on the implementation.
“Implement” is a slightly modified TDD cycle. With approval tests you still work test-driven and you still refactor frequently, but other aspects may differ. You may improve the Printer and approve the output many times before being ready to show the golden master to others for review.
“Review” – this activity is supposed to ensure the executable scenario is suitable to use as living documentation, and that business people can read it. The difference here is that the artifact being reviewed is the Approved Golden Master output, not the sketch you made in the “Formulate” activity. It’s particularly important to make sure business people are involved here because the living documentation that will be kept is a different artifact from the scenario they co-created in the ‘discover’ activities.
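To make the Printer idea concrete, here is a small sketch using a hypothetical lift example in the spirit of the Lift Kata article mentioned above (all names and the output format are illustrative, not taken from any particular tool):

```typescript
// State of the system under test
interface Lift {
  floor: number
  doorsOpen: boolean
}

// The Printer: renders system state in the same textual format as the sketch
// produced during formulation, so the output can be reviewed and approved.
function printLift(lift: Lift): string {
  const doors = lift.doorsOpen ? ']|[' : '|||'
  return `Floor ${lift.floor} ${doors}`
}

// An approval test compares the printed output against the approved golden master.
const approved = 'Floor 3 |||'
const received = printLift({ floor: 3, doorsOpen: false })
if (received !== approved) {
  throw new Error(`Received differs from approved:\n${received}\nvs\n${approved}`)
}
```

When the printed output legitimately changes, you review the diff and approve the new golden master instead of editing step definitions.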
But is this still BDD?
I’m happy to report that, yes, this is still BDD! I hope you can see the activities are not that different. Just as importantly, the BDD community is open and welcoming of diversity of practice. This article describes BDD practitioners as forming a ‘centered’ community rather than a bounded community. That means people are open to you varying the exact practices and processes of BDD so long as you uphold some common values. The really central part of BDD is the collaborative discovery process.
In this article I hope I’ve shown that using an approval testing approach upholds that collaborative discovery process. It modifies the way you do formulation, automation, and development, but in a way that retains the iterative, collaborative heart of BDD. For some kinds of system sketches and golden masters might prove to be easier for business people to understand than the more mainstream ‘Given-When-Then’ Gherkin format. In that case an approval testing tool might enable a better collaborative discovery process and propel you closer to the centre of BDD.
Conclusions
BDD is about a lot more than test automation, and Gherkin is not the only syntax you can use for that part. Approval testing is perfectly compatible with BDD. I’m happy I can both claim to be a member of the BDD community and continue to choose a testing tool that fits the context I’m working in. If you’d like to learn more about Approval testing check out this video of me pair programming with Adrian Bolboaca.
Authentication is hard. Even if you know the ins and outs of it, handling registration, login, email verification, forgotten passwords, secret rotation... and what not... is tedious work.
For this reason, we use auth providers such as AWS Cognito or Auth0. But this comes with its own drawback, namely that you are at the provider's mercy when it comes to examples and tutorials. If a resource you need does not exist, you either need to contact support and wait for them (but nobody got time for that), or figure it out yourself by the good ol' trial and error method.
A couple of days ago, I had to use Auth0 with Vue.js and TypeScript. Now, Auth0 has an excellent tutorial for Vue.js, but I could not find any examples in TypeScript. So seeing no better option, I started annotating the code provided by the tutorial.
I finished it, and in this blog post I'll walk you through the details, so you don't have to repeat this chore.
We will follow the original Auth0 Vue tutorial structure which can be found here. To make it easier to compare the two, we'll use the exact same first-level headings as the original.
First, you'll need to set up your Auth0 application. That part is very well written in the original tutorial, and I would like to be neither repetitive nor plagiarize Auth0's content, so please go ahead and read the first section there, then come back.
Create a Sample Application
Now we already start to diverge from the Auth0 tutorial.
If you already have an existing app, make sure that typescript, vue-class-component, and vue-property-decorator are present in your package.json, as we'll use class components.
If you don't have one, let's create a sample app.
$ vue create auth0-ts-vue
When prompted, select Manually select features.
We'll need Babel, TypeScript, and Router.
The next 3 questions ask whether you want to use class-style component syntax, Babel, and history mode. Hit enter for all three to answer "Yes". You can opt out of history mode if you really want to.
It is entirely up to you if you want to use dedicated config files or not, and if you want to save this as a preset.
Grab a beverage of your preference while the dependencies are being installed.
Install the SDK
Once it's done, we need to install our auth0 dependencies.
$ cd auth0-ts-vue
$ npm install @auth0/auth0-spa-js
The auth0-spa-js package comes with its own type definitions, so we're all set for now.
Modify your Webpack Config
If you followed the configuration part of the original Auth0 tutorial, you've set up your URLs to listen on port 3000. Time to hard-code this into our webpack dev server.
Create a vue.config.js file in the root directory of your app.
This way, we don't have to specify the PORT env var when we run our app; otherwise we'd have to keep updating the URLs in Auth0 while developing.
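A minimal `vue.config.js` that pins the dev server to that port might look like this (a sketch; adjust to your setup):

```javascript
// vue.config.js — make the webpack dev server always listen on port 3000,
// matching the callback URLs configured in Auth0
module.exports = {
  devServer: {
    port: 3000
  }
}
```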
Start the application
$ npm run serve
Leave it running so we can leverage Webpack's incremental build throughout the process.
Create an Authentication Wrapper
Have you ever created a Vue.js plugin? Well, now is the time!
The easiest way to use Auth0 in your app is to make it available on this in each of your components, just as you do with $route after you've installed Vue Router.
It would be nice if this was a separate dependency, but for the sake of simplicity, let it live inside our codebase.
Create a directory called auth inside your src dir, then create the following files: index.ts, auth.ts, VueAuth.ts, and User.ts. The original tutorial has them all in one file. Still, in my opinion, it is easier to understand what's happening if we separate the concerns a bit, and it results in nicer type definitions too.
Our index.ts will be a simple barrel file.
export * from './auth'
auth.ts is where we define the plugin. VueAuth.ts is a wrapper Vue object around auth0-spa-js, so we can leverage the observability provided by Vue, and User.ts is a class to make its type definition nicer.
Defining our User
Let's go from the inside out and take a look at User.ts
Now, this requires a bit of explanation. The first block of fields are the ones that are always present, no matter what login scheme the user used. sub is the OpenID ID Token's Subject Identifier, which contains the authentication provider (e.g. auth0 or google) and the actual user ID, separated by a |. The other mandatory fields are probably self-explanatory.
Next are provider and id, which result from splitting sub, so they should be there, but we cannot be sure. Last are the fields that are only present when Google OAuth is used as the provider. There might be more, depending on what connections you set up and what other data you request. Or you could even add custom fields to the returned ID Token... but I digress.
Last, we tell TypeScript that we want to be able to use bracket notation on our object by adding [key: string]: any.
Our constructor takes a raw user object with similar fields, but snake_cased. That's why we camelCase them and assign each of them to our User object. Once we're done, we extract the provider and the id from the sub field.
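The User.ts listing itself isn't shown above, so here is a minimal sketch matching that description (the field list and the regex-based camelCasing helper are my own illustration, not necessarily the original code):

```typescript
// Sketch of User.ts: wraps the raw snake_cased user object from auth0-spa-js
export class User {
  sub = ''
  provider?: string
  id?: string;
  // Allow bracket notation for any extra claims on the raw user object
  [key: string]: any

  constructor (auth0User?: { [key: string]: any }) {
    if (!auth0User) return
    for (const key of Object.keys(auth0User)) {
      // camelCase the snake_cased keys coming from the raw user object
      const camelKey = key.replace(/_([a-z])/g, (_m, c: string) => c.toUpperCase())
      this[camelKey] = auth0User[key]
    }
    // sub is "<provider>|<id>", e.g. "google-oauth2|12345"
    const [provider, id] = this.sub.split('|')
    this.provider = provider
    this.id = id
  }
}
```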
Show me the Wrapper
Time to take a look at VueAuth.ts
import { Vue, Component } from 'vue-property-decorator'
import createAuth0Client, { PopupLoginOptions, Auth0Client, RedirectLoginOptions, GetIdTokenClaimsOptions, GetTokenSilentlyOptions, GetTokenWithPopupOptions, LogoutOptions } from '@auth0/auth0-spa-js'
import { User } from './User'
export type Auth0Options = {
domain: string
clientId: string
audience?: string
[key: string]: string | undefined
}
export type RedirectCallback = (appState?: any) => void
@Component({})
export class VueAuth extends Vue {
loading = true
isAuthenticated? = false
user?: User
auth0Client?: Auth0Client
popupOpen = false
error?: Error
async getUser () {
return new User(await this.auth0Client?.getUser())
}
/** Authenticates the user using a popup window */
async loginWithPopup (o: PopupLoginOptions) {
this.popupOpen = true
try {
await this.auth0Client?.loginWithPopup(o)
} catch (e) {
console.error(e)
this.error = e
} finally {
this.popupOpen = false
}
this.user = await this.getUser()
this.isAuthenticated = true
}
/** Authenticates the user using the redirect method */
loginWithRedirect (o: RedirectLoginOptions) {
return this.auth0Client?.loginWithRedirect(o)
}
/** Returns all the claims present in the ID token */
getIdTokenClaims (o: GetIdTokenClaimsOptions) {
return this.auth0Client?.getIdTokenClaims(o)
}
/** Returns the access token. If the token is invalid or missing, a new one is retrieved */
getTokenSilently (o: GetTokenSilentlyOptions) {
return this.auth0Client?.getTokenSilently(o)
}
/** Gets the access token using a popup window */
getTokenWithPopup (o: GetTokenWithPopupOptions) {
return this.auth0Client?.getTokenWithPopup(o)
}
/** Logs the user out and removes their session on the authorization server */
logout (o: LogoutOptions) {
return this.auth0Client?.logout(o)
}
/** Use this lifecycle method to instantiate the SDK client */
async init (onRedirectCallback: RedirectCallback, redirectUri: string, auth0Options: Auth0Options) {
// Create a new instance of the SDK client using members of the given options object
this.auth0Client = await createAuth0Client({
domain: auth0Options.domain,
client_id: auth0Options.clientId, // eslint-disable-line @typescript-eslint/camelcase
audience: auth0Options.audience,
redirect_uri: redirectUri // eslint-disable-line @typescript-eslint/camelcase
})
try {
// If the user is returning to the app after authentication..
if (
window.location.search.includes('error=') ||
(window.location.search.includes('code=') && window.location.search.includes('state='))
) {
// handle the redirect and retrieve tokens
const { appState } = await this.auth0Client?.handleRedirectCallback() ?? { appState: undefined }
// Notify subscribers that the redirect callback has happened, passing the appState
// (useful for retrieving any pre-authentication state)
onRedirectCallback(appState)
}
} catch (e) {
console.error(e)
this.error = e
} finally {
// Initialize our internal authentication state when the page is reloaded
this.isAuthenticated = await this.auth0Client?.isAuthenticated()
this.user = await this.getUser()
this.loading = false
}
}
}
It might make sense to compare this with the original tutorial.
In the original tutorial, a Vue object is created, while here we create a class to make its annotation easier. There you can find it as:
// The 'instance' is simply a Vue object
instance = new Vue({
...
})
Now let's unpack it.
First, we need to import a couple of types, including our User class.
Then we create the Auth0Options and RedirectCallback type aliases for convenience.
Instead of creating a simple Vue object, we define a Class Component. The public fields are the same as the data object in the original, whereas the static ones are the parameters passed to the plugin.
We differ in two substantial ways from the original tutorial:
We have one less method: handleRedirectCallback is not used anywhere in the original, so we omitted it.
Instead of setting up the Auth0 Client in the Vue object's created hook, we use a separate method called init. Aside from that, the contents of the two are identical.
The reason for using a separate method is simple: with Class Components, the created hook is used in place of a constructor, as the constructor of the class is usually called by Vue.
First, a component object is created just like when using Vue({}), passing it the data, methods, watchers, param list, and all the things we usually define for components. When this is done, the created hook is called. Later, when the component is actually used and rendered, the params are passed to it, and it is mounted or updated.
The problem with the original approach is that we cannot pass parameters to the created method, nor can we write a proper constructor. So we need our own method that we call right after the object is instantiated, just as created is called by Vue.
Let's dissect init a bit.
First, we create an auth0Client.
Then, in the try-catch block, we check if the user is returning after authentication and handle it. We check whether the query params contain any signs of redirection. If they do, we call auth0Client.handleRedirectCallback, which parses the URL and either rejects with an error or resolves with an appState.
Then, we pass on the appState to onRedirectCallback. This is a function we can pass to the plugin when we install it to Vue, so we can handle the app level ramifications of a login.
For the other methods, getUser is a simple wrapper around the authClient's getUser method. We pass on the resolved promise to our User's constructor to create a nicely looking User object.
Next, there is loginWithPopup, which we won't use, as popups can be blocked by browsers. So we'll go with the redirect flow: the user is redirected to Auth0, logs in, and then the callback URL is called by Auth0, passing information to our app in the callback URL's query.
The information in the URL is parsed by auth0Client.handleRedirectCallback which will return a Promise<RedirectCallbackResult>. The Promise will be rejected if there is an error in the authentication flow.
We have a couple of simple wrappers around the auth0Client. loginWithRedirect initiates the flow I described above, logout speaks for itself.
Finally, we set up the user and check if we're authenticated.
Let's turn this into a Plugin
Now, all we need to do is create a proper plugin.
If you take a look at Vue's documentation about plugins, you'll see that we need to create an object that exposes an install method. This method will be called when we pass the object to Vue.use and it will receive the Vue constructor and optionally... options.
In our install method, we add an $auth member to any Vue object, so the VueAuth object is available everywhere, just as vue-router is.
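As a minimal, self-contained sketch of that shape (with a stand-in for the real useAuth0, which we define next; VueCtor is a structural type used here so the sketch doesn't depend on the vue package):

```typescript
// Sketch of the plugin object: install() receives the Vue constructor and the
// plugin options, and exposes the auth instance on every component as this.$auth.
type VueCtor = { prototype: { [key: string]: any } }

// Stand-in for the real useAuth0 defined in auth.ts below:
const useAuth0 = (options: any) => ({ options, loading: true })

export const Auth0Plugin = {
  install (Vue: VueCtor, options: any) {
    Vue.prototype.$auth = useAuth0(options)
  }
}

// Usage sketch: Vue.use(Auth0Plugin, { domain: '...', clientId: '...' })
```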
Let's implement the useAuth0 function.
/** Define a default action to perform after authentication */
const DEFAULT_REDIRECT_CALLBACK = () =>
  window.history.replaceState({}, document.title, window.location.pathname)

let instance: VueAuth

/** Returns the current instance of the SDK */
export const getInstance = () => instance

/** Creates an instance of the Auth0 SDK. If one has already been created, it returns that instance */
export const useAuth0 = ({
  onRedirectCallback = DEFAULT_REDIRECT_CALLBACK,
  redirectUri = window.location.origin,
  ...options
}) => {
  if (instance) return instance

  // The 'instance' is simply a Vue object
  instance = new VueAuth()
  instance.init(onRedirectCallback, redirectUri, options as Auth0Options)

  return instance
}
useAuth0 returns a singleton VueAuth instance, and extracts onRedirectCallback and redirectUri from the options object. What's left is an Auth0Options object, which we pass straight on to the auth0Client.
Here you can see the init method we created earlier in action; VueAuth is only instantiated if it hasn't been already. Above that, we also expose a getInstance function, in case we need to use the instance outside of a Vue component.
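The destructuring-with-defaults-and-rest pattern that useAuth0 relies on can be isolated into a tiny framework-free sketch (the names and the default value here are illustrative; the default stands in for window.location.origin):

```typescript
// Minimal sketch of the options handling in useAuth0: known fields are
// extracted (with defaults), everything else is collected as a rest object
// that would be handed straight to the Auth0 client.
type PluginOptions = { domain: string; clientId: string; redirectUri?: string }

export const splitOptions = ({
  redirectUri = 'https://example.test', // hypothetical default
  ...options
}: PluginOptions) => ({ redirectUri, clientOptions: options })
```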
Let's see the whole auth.ts here, for your copy-pasting convenience:
import { VueConstructor } from 'vue'
import { VueAuth, Auth0Options, RedirectCallback } from './VueAuth'

type Auth0PluginOptions = {
  onRedirectCallback: RedirectCallback,
  domain: string,
  clientId: string,
  audience?: string,
  [key: string]: string | RedirectCallback | undefined
}

/** Define a default action to perform after authentication */
const DEFAULT_REDIRECT_CALLBACK = () =>
  window.history.replaceState({}, document.title, window.location.pathname)

let instance: VueAuth

/** Returns the current instance of the SDK */
export const getInstance = () => instance

/** Creates an instance of the Auth0 SDK. If one has already been created, it returns that instance */
export const useAuth0 = ({
  onRedirectCallback = DEFAULT_REDIRECT_CALLBACK,
  redirectUri = window.location.origin,
  ...options
}) => {
  if (instance) return instance

  // The 'instance' is simply a Vue object
  instance = new VueAuth()
  instance.init(onRedirectCallback, redirectUri, options as Auth0Options)

  return instance
}

// Create a simple Vue plugin to expose the wrapper object throughout the application
export const Auth0Plugin = {
  install (Vue: VueConstructor, options: Auth0PluginOptions) {
    Vue.prototype.$auth = useAuth0(options)
  }
}
As you can see, we're extending the Vue constructor with a new instance member. If we try to access it in a component, the TypeScript compiler will start crying as it has no idea what happened. We'll fix this a bit later down the line.
Now, the Auth0Options are the ones needed for the client to identify your tenant. Copy the Client ID and Domain from your Auth0 application's settings and store them in a file called auth.config.json for now. It would be nicer to inject them as environment variables through webpack, but as these are not sensitive data, we'll be fine like this as well.
With all that said, I will not include my auth.config.json in the reference repo, only an example you'll need to fill in with your data.
Make sure to add "resolveJsonModule": true to the compilerOptions in your tsconfig.json.
Finally, we're ready to create our main.ts.
import Vue from 'vue'
import App from './App.vue'
import router from './router'
import { Auth0Plugin } from './auth'
import { domain, clientId } from '../auth.config.json'

Vue.use(Auth0Plugin, {
  domain,
  clientId,
  onRedirectCallback: (appState) => {
    router.push(
      appState && appState.targetUrl
        ? appState.targetUrl
        : window.location.pathname
    )
  }
})

Vue.config.productionTip = false

new Vue({
  router,
  render: h => h(App)
}).$mount('#app')
The onRedirectCallback redirects the user to a protected route after they have authenticated. We'll cover this a bit later when we create an actual protected route.
Log in to the App
Time to put the authentication logic to use.
First, we'll add a Login / Logout button to Home.vue
<template>
  <div class="home">
    <img alt="Vue logo" src="../assets/logo.png" />
    <HelloWorld msg="Welcome to Your Vue.js App" />
    <!-- Check that the SDK client is not currently loading before accessing its methods -->
    <div v-if="!$auth.loading">
      <!-- show login when not authenticated -->
      <button v-if="!$auth.isAuthenticated" @click="login">Log in</button>
      <!-- show logout when authenticated -->
      <button v-if="$auth.isAuthenticated" @click="logout">Log out</button>
    </div>
  </div>
</template>
We'll also need to update the logic in the script tag of Home
<script lang="ts">
import { Component, Vue } from 'vue-property-decorator'
import HelloWorld from '@/components/HelloWorld.vue'

@Component({
  components: {
    HelloWorld
  }
})
export default class Home extends Vue {
  login () {
    this.$auth.loginWithRedirect({})
  }

  // Log the user out
  logout () {
    this.$auth.logout({
      returnTo: window.location.origin
    })
  }
}
</script>
First, we turn the original example component into a Class Component. Second, the methods simply call the methods of VueAuth exposed by our Auth0Plugin.
But what's that? this.$auth is probably underlined in your IDE. Or if you try to compile the code you'll get the following error:
Of course, we still have to tell the compiler that we have augmented the Vue constructor with our $auth member.
Let's create a shims-auth0.d.ts file in our src directory. If you're using VSCode, you might need to reload the window to make the error go away.
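The shim itself is short. Here is a sketch of the module augmentation (the import path and type name are assumptions based on the VueAuth wrapper used earlier):

```typescript
// src/shims-auth0.d.ts: tell the compiler every Vue instance has an $auth member.
// Type-only declaration; nothing in this file exists at runtime.
import { VueAuth } from './auth/VueAuth'

declare module 'vue/types/vue' {
  interface Vue {
    $auth: VueAuth
  }
}
```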
Now, let's try to compile our code. If you have configured your Auth0 credentials correctly, you should be redirected to the Auth0 Universal Login page when you click Log in, and back to your app again once you have logged in.
Then, you should be able to click Log out and have the application log you out.
Display the User's Profile
So far so good, but let's try to create a protected route. Displaying the user's profile seems like a prime target for that.
Let's create a file called Profile.vue in src/views.
The code should compile, so let's check if we can navigate to the Profile page and see the data. For added profit, try logging in both with Google and with a username and password you register. Take note of the data you get in each case.
Secure the Profile Page
We have the route; time to make it protected. Let's create a new file in src/auth called authGuard.ts.
import { getInstance } from './auth'
import { NavigationGuard } from 'vue-router'
export const authGuard: NavigationGuard = (to, from, next) => {
  const authService = getInstance()

  // Declared up front so `fn` can safely reference it even when loading
  // has already finished and the watcher was never created
  let unwatch: (() => void) | undefined

  const fn = () => {
    // Unwatch loading
    unwatch && unwatch()

    // If the user is authenticated, continue with the route
    if (authService.isAuthenticated) {
      return next()
    }

    // Otherwise, log in
    authService.loginWithRedirect({ appState: { targetUrl: to.fullPath } })
  }

  // If loading has already finished, check our auth state using `fn()`
  if (!authService.loading) {
    return fn()
  }

  // Watch for the loading property to change before we check isAuthenticated
  unwatch = authService.$watch('loading', (loading: boolean) => {
    if (loading === false) {
      return fn()
    }
  })
}
First, we put auth.ts's getInstance to use. Then we create a function that checks whether the user is authenticated: if they are, we call next; otherwise, we redirect them to log in.
However, we should only call this function if the authService is not loading; before that, we don't yet have settled information about the login process.
If it is still loading, we set up a watcher for authService.loading, so when it turns false, we call our guard function. Also, notice that we use the unwatch function returned by $watch to clean up after ourselves in fn.
I personally prefer giving descriptive names to my functions, but I only wanted to change things for the sake of either type annotation, or stability, so forgive me for keeping fn as it is to maintain parity with the JS tutorial.
Guidance with Auth0, Vue & TypeScript
Auth0 and other authentication providers relieve us of the tedious job of handling user management ourselves. Auth0 in particular excels at providing educational resources for its users. The original Vue tutorial was really helpful, but seeing that TypeScript is becoming the industry standard for anything run by JavaScript runtimes, it would be nice to see more TypeScript tutorials.
I hope this article manages to fill in a bit of this gap. If you liked what you just read, please share it with those who might need guidance with Auth0, Vue & TypeScript!
Many of you have probably used Apache JMeter for load testing before. Still, it is easy to run into the limits imposed by running it on just one machine when trying to make sure that our API will be able to serve hundreds of thousands or even millions of users.
We can get around this issue by deploying our tests to multiple machines in the cloud and running them there.
In this article, we will take a look at one way to distribute and run JMeter tests across multiple droplets on DigitalOcean, using Terraform, Ansible, and a little bit of bash scripting to automate the process as much as possible.
Background: during the COVID-19 lockdowns, we were tasked by a company (which builds an e-learning platform primarily for schools) with building out an infrastructure that is:
geo redundant,
supports both single and multi-tenant deployments,
can be easily scaled to serve at least 1.5 million users in huge bursts,
and runs on-premises.
To make sure the application is able to handle these requirements, we needed to set up the infrastructure, and model a reasonably high burst in requests to get an idea about the load the application and its underlying infrastructure is able to serve.
In this article, we’ll share practical advice and some of the scripts we used to automate the load-testing process using JMeter, Terraform, and Ansible.
Why do we use JMeter for distributed load testing?
JMeter is not my favorite tool for load testing, owing mostly to the fact that scripting it is just awkward. But looking at the other tools that support distribution, it seems to be the best free one for now. k6 looks good, but right now it does not support distribution outside the paid, hosted version. Locust is another interesting one, but it focuses heavily on random test picking, and if that's not what you're looking for, it is quite awkward to use as well - just not flexible enough right now.
So, back to JMeter!
Terraform is an infrastructure-as-code tool: it lets us describe the resources we want in our deployment and create them on our cloud provider of choice, DigitalOcean. Ansible then configures the droplets so they are ready to run the tests. With some changes, you can make this work with any other provider, as well as with your on-premises machines if you wish.
Deploying the infrastructure
There will be two kinds of instances we'll use:
primary, of which we'll have one coordinating the testing,
and runners, of which we can have any number.
In the example, we're going to go with two, but we'll see that it is easy to change this when needed.
You can check the variables.tf file to see what we'll use. You can use these to customise most aspects of the deployment to fit your needs. This file holds the vars that will be plugged into the other template files - main.tf and provider.tf.
The one variable you'll need to provide to Terraform for the example setup to work is your DigitalOcean API token, which you can export like this from the terminal:
export TF_VAR_do_token=DO_TOKEN
Should you wish to change the number of test runner instances, you can do so by exporting this other environment variable:
export TF_VAR_instance_count=2
You will need to generate two SSH key pairs: one for the root user, and one for a non-privileged user. These will be used by Ansible, which deploys the testing infrastructure over SSH, as it is agent-less. We will also use the non-privileged user to copy files and execute commands on the primary node when starting the tests. The keys need to be set up with the correct permissions; otherwise, you'll just get an error.
Set the permissions to 600 or 700 like this:
chmod 600 /path/to/folder/with/keys/*
To begin, open a terminal in the terraform folder and call terraform init, which prepares the working directory. This needs to be called again if the configuration changes.
You can use terraform plan, which outputs a summary of the pending changes to the console, to double-check that everything is right. On the first run, that summary describes the entire deployment.
Next, we call terraform apply which will actually apply the changes according to our configuration, meaning we'll have our deployment ready when it finishes! It also generates a .tfstate file with all the information about said deployment.
If you wish to dismantle the deployment after the tests are done, you can use terraform destroy. You'll need the .tfstate file for this to work, though! Without the state file, you have to delete the created droplets by hand, and also remove the SSH key that has been added to DigitalOcean.
Running the JMeter tests
The shell script we are going to use for running the tests is a convenience wrapper: it copies the test file to our primary node, cleans up files from previous runs, runs the tests, and then fetches the results.
#!/bin/bash
set -e
# Argument parsing, with options for long and short names
for i in "$@"
do
case $i in
-o=*|--out-file=*)
# i#*= This removes the shortest substring ending with
# '=' from the value of variable i - leaving us with just the
# value of the argument (i is argument=value)
OUTDIR="${i#*=}"
shift
;;
-f=*|--test-file=*)
TESTFILE="${i#*=}"
shift
;;
-i=*|--identity-file=*)
IDENTITYFILE="${i#*=}"
shift
;;
-p=*|--primary-ip=*)
PRIMARY="${i#*=}"
shift
;;
esac
done
# Check if we got all the arguments we'll need
if [ -z "$TESTFILE" ] || [ ! -f "$TESTFILE" ]; then
echo "Please provide a test file"
exit 1
fi
if [ -z "$OUTDIR" ]; then
echo "Please provide a result destination directory"
exit 1
fi
if [ -z "$IDENTITYFILE" ]; then
echo "Please provide an identity file for ssh access"
exit 1
fi
if [ -z "$PRIMARY" ]; then
PRIMARY=$(terraform output primary_address)
fi
# Copy the test file to the primary node
scp -i "$IDENTITYFILE" -o IdentitiesOnly=yes -oStrictHostKeyChecking=no "$TESTFILE" "runner@$PRIMARY:/home/runner/jmeter/test.jmx"
# Remove files from previous runs if any, then run the current test
ssh -i "$IDENTITYFILE" -o IdentitiesOnly=yes -oStrictHostKeyChecking=no "runner@$PRIMARY" << "EOF"
rm -rf /home/runner/jmeter/result
rm -f /home/runner/jmeter/result.log
cd jmeter/bin ; ./jmeter -n -r -t ../test.jmx -l ../result.log -e -o ../result -Djava.rmi.server.hostname=$(hostname -I | awk ' {print $1}')
EOF
# Get the results
scp -r -i "$IDENTITYFILE" -o IdentitiesOnly=yes -oStrictHostKeyChecking=no "runner@$PRIMARY":/home/runner/jmeter/result "$OUTDIR"
Running the script requires the path to the non-root SSH key. The call will look something like this:
You can also supply the IP of the primary node using -p= or --primary-ip= in case you don't have access to the .tfstate file. Otherwise, the script will ask terraform for the IP.
JMeter will then take care of distributing the tests across the runner nodes, and it will aggregate the data when they finish. The only thing to keep in mind is that the user count we set in our test will not be split between the runners but multiplied. For example, if you set the user count to 100, each runner node will run the test with 100 users.
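In other words (illustrative arithmetic, not JMeter code):

```typescript
// Each runner executes the full test plan, so the generated load
// scales with the runner count rather than being divided across runners.
export const totalConcurrentUsers = (usersInPlan: number, runnerCount: number): number =>
  usersInPlan * runnerCount
```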
And that's how you can use Terraform and Ansible to run your distributed JMeter tests on DigitalOcean!
Looking for DevOps & Infra Experts?
In case you’re looking for expertise in infrastructure-related matters, I’d recommend reading our articles and ebooks on the topic, and checking out our various service pages.
사랑, 기억에 머물다 is a web drama about mind-reading technology and love, and features a member of the popular Korean girl group Apink (에이핑크). You can learn very useful real-life Korean expressions through engaging episodes!
This is part 2 of the drama course, which contains 11 lessons.
사랑, 기억에 머물다 is a web drama about mind-reading technology and love, and features a member of the popular Korean girl group Apink (에이핑크). You can learn very useful real-life Korean expressions through engaging episodes!
This is part 1 of the drama course, which contains 11 lessons.
The main phrase of this lesson: 그래. 다 그런 거지 뭐. 겉과 속이 다른 세상. 참…
그래. = Yes. / That’s right. 다 그런 거지. = Sure, it’s like that. 뭐 = adds a sense of giving up or low expectations 겉과 속이 다른 세상 = a world where people are different inside and outside 참 = sigh
Lesson 2
The main phrase of this lesson: 네가 이렇게 미안해 할 줄 알고, 내가 네 이름으로 좀 먹었다.
네가 = you 이렇게 = like this 미안해 하다 = to feel apologetic 네 이름으로 = under your name 좀 = a little bit
Lesson 3
The main phrase of this lesson: 나 여기 단골인데… 왜 몰랐지?
나 = I 여기 = here, this place 단골 = regular customer 왜 = why 모르다 = to not know
Lesson 4
The main phrase of this lesson: 지금 입맛이 별로 없어. 그리고 이게 뭐 하루 이틀이냐.
지금 = now 입맛 = appetite 별로 = not particularly 그리고 = and 하루 = one day 이틀 = two days
Lesson 5
The main phrases of this lesson: 샤워하고 나오잖아? 뽀샤시해 가지고 기가 막히게 예뻐요. 청순하고.
샤워하다 = to take a shower 뽀샤시하다 = to have perfect skin, to glow 기가 막히게 = amazingly, breathtakingly 예쁘다 = to be pretty 청순하다 = to have innocent beauty
Lesson 6
The main phrase of this lesson: 뭐야? 추운데 왜 길거리에서 저러고 있는 거야?
뭐야? = What is it? 춥다 = to be cold 왜 = why 길거리 = street 저러고 = 저렇게 하고 = like that
Lesson 7
The main phrase of this lesson: 분명 있었는데… 내가 잘못 본 건가?
분명 = definitely, certainly 있다 = to exist, to be there 있었는데 = they were there but 잘못 = incorrectly 보다 = to see
Lesson 8
The main phrase of this lesson: 이야, 여기 다녀? 우리 진짜 인연인가 보네.
이야. = Wow. 여기 = here 다니다 = to go regularly, to attend 우리 = we 진짜 = really 인연 = destiny, meant to be together, meant to meet -인가 보네 = I guess it is, it seems to be
Lesson 9
The main phrases of this lesson: 인상착의는 기억나세요? / 그게 밤이라… 또 너무 어두워 가지고… 잘….
인상 = what someone looks like 착의 = what someone is wearing 인상착의 = appearance 기억나다 = to remember 그게 = The thing is… 밤 = night -이라(서) = because 또 = also 너무 = very, too 어둡다 = to be dark 잘 = well, skillfully
Lesson 10
The main phrase of this lesson: 야, 나 오늘 짤리는 거 아닌가 조마조마했거든. 엄청 잘 됐지?
야 = hey 오늘 = today 짤리다 (잘리다) = to be cut, to be fired -는 거 아닌가 = perhaps I will, maybe I will 조마조마하다 = to be nervous 엄청 = very, really 잘 되다 = to go well, to be good news
Lesson 11
The main phrase of this lesson: 아주 잘 하는 짓이다.
아주 = very 잘 = well 하다 = do 짓 = deed, doing, action
First of all, you are absolutely in the right place. And second of all, we have a video for you that will save you a lot of time.
The general sentence structure and word order in the Korean language can be quite different from English, because verbs generally come at the very end.
Understanding how Korean sentences are formed in general will help you navigate better through the Korean language. Once you are done with the video lesson above, you are ready to start learning more Korean! Even if you have no prior knowledge of the Korean language, we have everything you need to learn to speak Korean. You can start learning with our paper books, e-books or online courses that you can take on the go!
Is there a shortcut to becoming a 10x developer? Is there some magical secret that — if you only knew it — would unlock a whole new world of software development mastery and productivity for you?
This is where the doubters are thinking “There are no shortcuts! Everybody needs to practice to get good!” And that’s true enough, but what are the experts of software productivity practicing, and is there one key thing that can make a huge difference?
Yes! There is!
But even if I share it with you — even if I give it away and spell it out for you in detail — it might take you 10 years to grow into and fully appreciate the simplicity of it.
At least, that’s what happened to me. It was spelled out to me in plain English by my high school programming teacher. I was walked step-by-step through the process of applying it using some example code. And it didn’t really sink in until 10 years later. But now, with the benefit of experience, it’s a lesson I appreciate profoundly, and even though I know it’s a lesson you can’t truly appreciate at first glance — I’m going to share it with you.
This secret is a key difference between average productivity and 10x productivity. Using the leverage that this secret provides, you can be orders of magnitude more efficient.
You can write code that is more reusable and less likely to break when new requirements are introduced and things change in the surrounding code.
The secret to being 10x more productive is to gain a mastery of abstraction. A lot of developers treat “abstraction” like it’s a dirty word. You’ll hear (otherwise good) advice like, “don’t abstract too early” or Zen of Python’s famous “explicit is better than implicit,” implying that concrete is better than abstract. And all of that is good advice — depending on context.
But modern apps use a huge amount of code. If you printed out the source code of today's top 10 applications, the stacks of paper would compete with the height of skyscrapers, and software costs a lot of money to maintain. The more code you create, the more it costs.
Abstraction is the Key to Simple Code
The right abstractions can make code more readable, adaptable, and maintainable by hiding details which are unimportant for the current context, and reducing the amount of code required to do the same work — often by orders of magnitude.
“Simplicity is about subtracting the obvious and adding the meaningful.”
Abstraction is not a 1-way street. It’s really formed by two complementary concepts:
Generalization — Removing the repeated parts (the obvious) and hiding them behind an abstraction.
Specialization — Applying the abstraction for a particular use-case, adding just what needs to be different (the meaningful).
Consider the following code:
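(The example was embedded in the original post; as an illustrative stand-in with the characteristics described below, assume a function that doubles each number in an array:)

```typescript
// Verbose version: committed to arrays, carries its own iteration logic,
// and uses explicit assignment instead of describing the operation.
export const doubleList = (list: number[]): number[] => {
  const doubled: number[] = []
  for (let i = 0; i < list.length; i++) {
    doubled[i] = list[i] * 2
  }
  return doubled
}
```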
There’s nothing inherently wrong with the code, but it contains a lot of details that may not be important for this particular application.
It includes details of the container/transport data structure being used (the array), meaning that it will only work with arrays. It contains a state shape dependency.
It includes the iteration logic, meaning that if you need other operations which also need to visit each element in the data structure, you’d need to repeat very similar iteration logic in that code, as well. It forces repetition which could violate DRY (Don’t Repeat Yourself).
It includes an explicit assignment, rather than declaratively describing the operation to be performed. It’s verbose.
None of that is necessary. All of it can be hidden behind an abstraction. In this case, an abstraction that is so universal, it has transformed the way modern applications are built and reduced the number of explicit for-loops we need to write.
“If you touch one thing with deep awareness, you touch everything.”
~ Thich Nhat Hanh
Using the map operation, we can reduce the code to a one-liner by removing the obvious (the parts we’re likely to repeat in similar code), and focusing on the meaningful (just the stuff that needs to be different for our use case):
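For instance, sticking with an illustrative doubling operation:

```typescript
// Only the meaningful part remains: what to do with each element.
// Iteration, container handling, and assignment are hidden behind map.
export const doubleAll = (list: number[]): number[] => list.map(x => x * 2)
```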
Junior developers think they have to write a lot of code to produce a lot of value.
Senior developers understand the value of the code that nobody needed to write.
Imagine being the coder who popularized the use of the map operation in programming languages like JavaScript. Map abstracts away details such as the type of data you’re mapping over, the type of data structure containing the data, and the iteration logic required to enumerate each data node in the data structure. It’s improved the efficiency of every app I’ve built in the past decade.
Jeremy Ashkenas made several such operations popular in JavaScript, and paved the way for many of the great syntax shortcuts we take for granted now in JavaScript by pioneering their use in CoffeeScript. He made Underscore, which spawned Lodash (still the most popular functional programming utility belt in JavaScript), and Backbone, which popularized MVC architecture in JavaScript and set the stage for Angular and React.
John Resig made jQuery, which was so popular and influential, it formed the biggest collection of reusable, encapsulated JavaScript modules (jQuery plugins) until standard Node modules and ES6 modules appeared several years later. jQuery’s selector API was so influential it forms the basis of today’s DOM selection APIs. I still benefit on a nearly daily basis from jQuery’s selection API when I unit test React components.
The right abstractions are powerful levers that can impact productivity dramatically. Abstraction is not a dirty word. Modules, functions, variables, classes — all of these are forms of abstraction and the entire reason any of them exist is to make abstraction and composition of abstractions easier.
You can’t build complex software without abstractions. Even assembly language uses abstractions — names for instructions, variables for memory addresses, code points to jump to for subroutines (like function calls), etc. Modern software is a layer cake of useful abstractions, and those layers give you leverage.
“Give me a lever long enough and a fulcrum on which to place it, and I shall move the world.”
~ Archimedes
The key to simplicity, the secret we’re after, is how to reduce the mountain of code we’re producing: how to get a lot more done with a lot less. When you master that, you will be a 10x programmer. I guarantee it.
Eric Elliott is a tech product and platform advisor, author of “Composing Software”, cofounder of EricElliottJS.com and DevAnywhere.io, and dev team mentor. He has contributed to software experiences for Adobe Systems, Zumba Fitness, The Wall Street Journal, ESPN, BBC, and top recording artists including Usher, Frank Ocean, Metallica, and many more.
He enjoys a remote lifestyle with the most beautiful woman in the world.
The Secret of Simple Code was originally published in JavaScript Scene on Medium, where people are continuing the conversation by highlighting and responding to this story.
Learn all the official rules of Korean pronunciation
Learn the correct pronunciation of words or phrases that Korean learners often make mistakes when pronouncing
Practice along with a native speaker to improve your pronunciation
Take quizzes designed to advance your listening and help you differentiate sounds
Trailer + Sample lesson
Lecturers
Cassie Casper, Kyung-hwa Sun
Course language
English
What can you find inside the course?
25 video lessons
A PDF file of lesson notes
Customer reviews
“Hands down the best money I spent on TTMIK products. I learned so much. (Rather I had to unlearn things to relearn them.) While I may not say everything correctly 100% of the time yet since I am still memorizing all the rules and still training myself to forget what I was taught before, I have noticed I am starting to hear the sounds (or what letter they are pronounced like) correctly more often than I was before. Plus when I forget a rule while I’m studying and the word I hear sounds different from what I said I now understand why and I know there is just a rule there that I just need more practice with. It is a much better and more reassuring situation than before where someone was telling me “this is how this one word is pronounced but that is all you’re going to get for this lesson. figure the rest out for yourself.” Thank you TTMIK.”
Carrie P.
“Love these. The videos are actually enjoyable to watch and make studying fun. Honestly! It’s like lightbulbs flashing on when I learn the mysteries of when things don’t sound like they’re written. Everyone should watch this. You’ll save yourself a LOT of confusion and stress.”
Barbara B.
Table of contents
Diphthongs: Why don’t Koreans pronounce 의 as 의?
Long/Short Vowel Sounds: Are 눈(snow) and 눈(eye) pronounced differently?
Batchim: 빋 = 빗 = 빚 = 빛 = 빝 = 빟
Compound Consonants as Batchim: Should I pronounce the ㄹ or ㄱ in 읽다?
Assimilation Part 1: 닫히다 and 다치다 are pronounced exactly the same way.
Assimilation Part 2: ㄴ always becomes ㄹ when it’s with ㄹ.
Fortition Part 1: Why is 박주연 pronounced 박쭈연?
Fortition Part 2: Why is 갈 거예요 pronounced 갈 꺼예요?
ㄴ Insertion: Why isn’t 꽃잎 pronounced 꼬칲?
ㅅ Insertion: Why isn’t 나뭇잎 pronounced 나무싶?
Ending Consonant Sounds: Ignore the romanizations.
Redux is an amazing tool if you take the time to get to know it. One of the things about Redux that commonly trips people up is that reducers must be pure functions.
A pure function is a function which:
Given same arguments, always returns the same result, and
Has no side-effects (e.g., it won’t mutate its input arguments).
The problem is that sometimes a reducer needs to make a complicated change to some input state, but you can’t just mutate the state argument without causing bugs.
The solution is a handy tool called Immer. In this video, I’ll introduce you to Immer and show you how to use it to reduce the complexity of your reducer code. With one or two small reducers, the difference is pretty subtle, but on a large project, it can significantly simplify your application code.
Here’s an example. Imagine you’re building a social network, and you need to keep track of posts that a user has liked. When they like a post, you add a like object to the user’s likes collection. That might look something like this:
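(The snippet here was an embedded example; the following is an illustrative reconstruction, with assumed state and action shapes, of a reducer that spreads ...state and ...state.likes:)

```typescript
type Like = { id: string; postId: string }
type LikesState = { userName: string; likes: Like[] }
type LikeAction = { type: string; payload?: Like }

// Without Immer: every nesting level has to be spread
// to avoid mutating the input state.
export const likesReducer = (state: LikesState, action: LikeAction): LikesState => {
  switch (action.type) {
    case 'like':
      return {
        ...state,
        likes: [...state.likes, action.payload!]
      }
    default:
      return state
  }
}
```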
Notice what we’re returning from the 'like' action type case: we mix bits of the payload into a nested property of the state object using the JavaScript object spread syntax (...state and ...state.likes) on lines 19–25.
With Immer, you can simplify that part to a one-liner (line 21):
The produce function returns a partial application (a function which has been partially applied to its arguments) which then takes the arguments of the reducer function. You pass it a callback function which takes a draft of the state object instead of the real state object. You’re free to mutate that object as if it were any other mutable object in JavaScript. No more spreading nested properties to avoid mutating the input argument.
After your callback function runs, Immer compares the draft to the original state and then builds a new object with your changes applied, so your function feels like it’s mutating, but still behaves like a pure function. You get the best of both worlds: The simplicity of mutation with the benefits of immutability.
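To make the mechanics concrete, here is a toy stand-in for Immer's curried produce. This is not Immer itself: real Immer uses proxies and structural sharing, whereas this sketch hands the recipe a deep copy as the "draft":

```typescript
// Toy approximation of Immer's curried produce: give the recipe a disposable
// copy of the state, let it mutate freely, and return the copy as the new state.
export const producePlain = <S, A>(recipe: (draft: S, action: A) => void) =>
  (state: S, action: A): S => {
    const draft: S = JSON.parse(JSON.stringify(state)) // deep copy stands in for Immer's proxy draft
    recipe(draft, action)
    return draft
  }

type DraftLikes = { likes: string[] }

// The reducer body now mutates the draft instead of spreading every level.
export const likesWithProduce = producePlain<DraftLikes, { type: string; payload: string }>(
  (draft, action) => {
    if (action.type === 'like') draft.likes.push(action.payload)
  }
)
```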
Next Steps
EricElliottJS.com has in-depth lessons on topics like pure functions, immutability, partial applications, and other functional and object oriented programming concepts.
As some of you might have guessed already, I have been a bit busy with the current beta of the new Delphi 10.4 Sidney. Usually one is not allowed to talk about the beta, but I have been given permission from Embarcadero to blog about some of the new features and improvements. Please note that everything said and shown here is from a pre-release version.
As David Millington has already blogged about the redesigned Code Insight and Marco Cantú has shared some news about Custom Managed Records, one of my personal favorites is the set of enhancements related to VCL styles.
In the past years quite a couple of my customers jumped on the VCL Style train as a simple and effective way to give their applications a fresh and modern look. Interestingly, most of them opted for an individual style tailored to match their corporate design.
One of these applications makes use of TMS Scripter to extend the standard functionality with customer designed forms. The Script Designer coming with Scripter is an extremely helpful tool in that.
Unfortunately, the Script Designer doesn’t play well with VCL styles. That would be only a minor problem, as the script design is not done very often, and then mostly by the service staff, so VCL styling is not critical there. If only we could disable styling for the designer form alone.
Now we can! Delphi 10.4 allows setting different styles per form and even per control.
That solves the above problem exactly. I can even imagine other use cases for having different styles for controls on the same form: some styles differ only marginally, and this could be used to highlight some controls on a form more prominently than others.
This is a sample application with two forms. One form uses the native Windows style and the other uses the Windows10 Green VCL style. The VCL styled form contains two frames where one uses the Windows10 Blue style.
Another enhancement is the support for High DPI styles with the key feature of having different control entries for different sizes, backed up by bitmaps for different sizes. This is a sample project on a monitor with 100% scaling:
And here is the same form dragged to a monitor with 175% scaling (open in a new tab to see the actual size):
The button icons are taken from a TVirtualImageList connected to a TImageCollection, as has already been available in Delphi 10.3 Rio. The two images to the left are shown with the new TVirtualImage control, which also takes its images from the same TImageCollection, dynamically scaled to the size needed.
Note: This is part of the “Composing Software” series (now a book!) on learning functional programming and compositional software techniques in JavaScript ES6+ from the ground up. Stay tuned. There’s a lot more of this to come! Buy the Book | Index | < Previous | Next >
Abstract Data Types
Not to be confused with:
Algebraic Data Types (sometimes abbreviated ADT or AlgDT). Algebraic Data Types refer to complex types in programming languages (e.g., Rust, Haskell, F#) that display some properties of specific algebraic structures. e.g., sum types and product types.
Algebraic Structures. Algebraic structures come from abstract algebra and, like ADTs, are commonly specified in terms of axioms, but they are applicable far outside the world of computers and code. An algebraic structure can exist that is impossible to model completely in software. By contrast, Abstract Data Types serve as a specification and guide to formally verify working software.
An Abstract Data Type (ADT) is an abstract concept defined by axioms that represent some data and operations on that data. ADTs are not defined in terms of concrete instances and do not specify the concrete data types, structures, or algorithms used in implementations. Instead, ADTs define data types only in terms of their operations, and the axioms to which those operations must adhere.
Common ADT Examples
List
Stack
Queue
Set
Map
Stream
ADTs can represent any set of operations on any kind of data. In other words, the exhaustive list of all possible ADTs is infinite for the same reason that the exhaustive list of all possible English sentences is infinite. ADTs are the abstract concept of a set of operations over unspecified data, not a specific set of concrete data types. A common misconception is that the specific examples of ADTs taught in many university courses and data structure textbooks are what ADTs are. Many such texts label the data structures “ADTs” and then skip the ADT and describe the data structures in concrete terms instead, without ever exposing the student to an actual abstract representation of the data type. Oops!
ADTs can express many useful algebraic structures, including semigroups, monoids, functors, monads, etc. The Fantasy Land specification is a useful catalog of algebraic structures described by ADTs to encourage interoperable implementations in JavaScript. Library builders can verify their implementations using the supplied axioms.
Why ADTs?
Abstract Data Types are useful because they provide a way for us to formally define reusable modules in a way that is mathematically sound, precise, and unambiguous. This allows us to share a common language to refer to an extensive vocabulary of useful software building blocks: Ideas that are useful to learn and carry with us as we move between domains, frameworks, and even programming languages.
History of ADTs
In the 1960s and early 1970s, many programmers and computer science researchers were interested in the software crisis. As Edsger Dijkstra put it in his Turing award lecture:
“The major cause of the software crisis is that the machines have become several orders of magnitude more powerful! To put it quite bluntly: as long as there were no machines, programming was no problem at all; when we had a few weak computers, programming became a mild problem, and now we have gigantic computers, programming has become an equally gigantic problem.”
The problem he refers to is that software is very complicated. A printed version of the Apollo lunar module guidance software for NASA is about the height of a filing cabinet. That’s a lot of code. Imagine trying to read and understand every line of it.
Modern software is orders of magnitude more complicated. Facebook was roughly 62 million lines of code in 2015. If you printed 50 lines per page, you’d fill 1.24 million pages. If you stacked those pages, you’d get about 1,800 pages per foot, or 688 feet. That’s taller than the Millennium Tower, the tallest residential building in San Francisco at the time of this writing.
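The back-of-the-envelope arithmetic behind that comparison is simple to check:

```javascript
// Rough scale of the Facebook codebase figure quoted above
const lines = 62_000_000;    // ~62 million lines of code (2015)
const pages = lines / 50;    // 50 lines per page -> 1,240,000 pages
const feet = pages / 1_800;  // ~1,800 pages per foot -> ~688 feet of paper
console.log(pages, Math.floor(feet));
```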
Managing software complexity is one of the primary challenges faced by virtually every software developer. In the 1960s and 1970s, they didn’t have the languages, patterns, or tools that we take for granted today. Things like linters, intellisense, and even static analysis tools were not invented yet.
Many software engineers noted that the hardware they built things on top of mostly worked. But software, more often than not, was complex, tangled, and brittle. Software was commonly:
Over budget
Late
Buggy
Missing requirements
Difficult to maintain
If only you could think about software in modular pieces, you wouldn’t need to understand the whole system to understand how to make part of the system work. That principle of software design is known as locality. To get locality, you need modules that you can understand in isolation from the rest of the system. You should be able to describe a module unambiguously without over-specifying its implementation. That’s the problem that ADTs solve.
Stretching from the 1960s almost to the present day, advancing the state of software modularity was a core concern. It was with those problems in mind that people including Barbara Liskov (the same Liskov referenced in the Liskov Substitution Principle from the SOLID OO design principles), Alan Kay, Bertrand Meyer and other legends of computer science worked on describing and specifying various tools to enable modular software, including ADTs, object-oriented programming, and design by contract, respectively.
ADTs emerged from the work of Liskov and her students on the CLU programming language between 1974 and 1975. They contributed significantly to the state of the art of software module specification — the language we use to describe the interfaces that allow software modules to interact. Formally provable interface compliance brings us significantly closer to software modularity and interoperability.
Liskov was awarded the Turing award for her work on data abstraction, fault tolerance, and distributed computing in 2008. ADTs played a significant role in that accomplishment, and today, virtually every university computer science course includes ADTs in the curriculum.
The software crisis was never entirely solved, and many of the problems described above should be familiar to any professional developer, but learning how to use tools like objects, modules, and ADTs certainly helps.
Specifications for ADTs
Several criteria can be used to judge the fitness of an ADT specification. I call these criteria FAMED, but I only invented the mnemonic. The original criteria were published by Liskov and Zilles in their famous 1975 paper, “Specification Techniques for Data Abstractions.”
Formal. Specifications must be formal. The meaning of each element in the specification must be defined in enough detail that the target audience should have a reasonably good chance of constructing a compliant implementation from the specification. It must be possible to implement an algebraic proof in code for each axiom in the specification.
Applicable. ADTs should be widely applicable. An ADT should be generally reusable for many different concrete use-cases. An ADT which describes a particular implementation in a particular language in a particular part of the code is probably over-specifying things. Instead, ADTs are best suited to describe the behavior of common data structures, library components, modules, programming language features, etc. For example, an ADT describing stack operations, or an ADT describing the behavior of a promise.
Minimal. ADT specifications should be minimal. The specification should include the interesting and widely applicable parts of the behavior and nothing more. Each behavior should be described precisely and unambiguously, but in as little specific or concrete detail as possible. Most ADT specifications should be provable using a handful of axioms.
Extensible. ADTs should be extensible. A small change in a requirement should lead to only a small change in the specification.
Declarative. Declarative specifications describe what, not how. ADTs should be described in terms of what things are, and relationship mappings between inputs and outputs, not the steps to create data structures or the specific steps each operation must carry out.
A good ADT should include:
Human readable description. ADTs can be rather terse if they are not accompanied by some human readable description. The natural language description, combined with the algebraic definitions, can act as checks on each other to clear up any mistakes in the specification or ambiguity in the reader’s understanding of it.
Definitions. Clearly define any terms used in the specification to avoid any ambiguity.
Abstract signatures. Describe the expected inputs and outputs without linking them to concrete types or data structures.
Axioms. Algebraic definitions of the axiom invariants used to prove that an implementation has satisfied the requirements of the specification.
Stack ADT Example
A stack is a Last In, First Out (LIFO) pile of items which allows users to interact with the stack by pushing a new item to the top of the stack, or popping the most recently pushed item from the top of the stack.
Stacks are commonly used in parsing, sorting, and data collation algorithms.
Definitions
a: Any type
b: Any type
item: Any type
stack(): an empty stack
stack(a): a stack of a
[item, stack]: a pair of item and stack
Abstract Signatures
Construction
The stack operation takes any number of items and returns a stack of those items. Typically, the abstract signature for a constructor is defined in terms of itself. Please don’t confuse this with a recursive function.
stack(...items) => stack(...items)
Stack Operations (operations which return a stack)
push(item, stack()) => stack(item)
pop(stack) => [item, stack]
Axioms
The stack axioms deal primarily with stack and item identity, the sequence of the stack items, and the behavior of pop when the stack is empty.
Identity
Pushing and popping have no side-effects. If you push to a stack and immediately pop from the same stack, the stack should be in the state it was before you pushed.
pop(push(a, stack())) = [a, stack()]
Given: push a to the stack and immediately pop from the stack
Should: return a pair of a and stack().
Sequence
Popping from the stack should respect the sequence: Last In, First Out (LIFO).
pop(push(b, push(a, stack()))) = [b, stack(a)]
Given: push a to the stack, then push b to the stack, then pop from the stack
Should: return a pair of b and stack(a).
Empty
Popping from an empty stack results in an undefined item value. In concrete terms, this could be defined with a Maybe(item), Nothing, or Either. In JavaScript, it’s customary to use undefined. Popping from an empty stack should not change the stack.
pop(stack()) = [undefined, stack()]
Given: pop from an empty stack
Should: return a pair of undefined and stack().
Concrete Implementations
An abstract data type could have many concrete implementations, in different languages, libraries, frameworks, etc. Here is one implementation of the above stack ADT, using an encapsulated object, and pure functions over that object:
// A stack implemented with an encapsulated object and pure
// functions over that object:
const stack = (...items) => ({
  items,
  // So we can compare stacks in our assert function
  toString: () => `stack(${ items.join(',') })`
});

const push = (item, { items }) => stack(...items, item);

const pop = ({ items }) => {
  const newItems = [...items];
  // remove the last item from the list and
  // assign it to a variable
  const [item] = newItems.splice(-1);
  // return the pair
  return [item, stack(...newItems)];
};

// A simple assert function which will display the results
// of the axiom tests, or throw a descriptive error if an
// implementation fails to satisfy an axiom.
const assert = ({ given, should, actual, expected }) => {
  const stringify = value => Array.isArray(value) ?
    `[${ value.map(stringify).join(',') }]` :
    `${ value }`;

  const actualString = stringify(actual);
  const expectedString = stringify(expected);

  if (actualString === expectedString) {
    console.log(`OK:
  given: ${ given }
  should: ${ should }
  actual: ${ actualString }
  expected: ${ expectedString }`);
  } else {
    throw new Error(`NOT OK:
  given: ${ given }
  should: ${ should }
  actual: ${ actualString }
  expected: ${ expectedString }`);
  }
};
// Concrete values to pass to the functions:
const a = 'a';
const b = 'b';

// Proofs
assert({
  given: 'push `a` to the stack and immediately pop from the stack',
  should: 'return a pair of `a` and `stack()`',
  actual: pop(push(a, stack())),
  expected: [a, stack()]
});

assert({
  given: 'push `a` to the stack, then push `b` to the stack, then pop from the stack',
  should: 'return a pair of `b` and `stack(a)`.',
  actual: pop(push(b, push(a, stack()))),
  expected: [b, stack(a)]
});

assert({
  given: 'pop from an empty stack',
  should: 'return a pair of undefined, stack()',
  actual: pop(stack()),
  expected: [undefined, stack()]
});
Conclusion
An Abstract Data Type (ADT) is an abstract concept defined by axioms which represent some data and operations on that data.
Abstract Data Types are focused on what, not how (they’re framed declaratively, and do not specify algorithms or data structures).
Common examples include lists, stacks, sets, etc.
ADTs provide a way for us to formally define reusable modules in a way that is mathematically sound, precise, and unambiguous.
ADTs emerged from the work of Liskov and students on the CLU programming language in the 1970s.
ADTs should be FAMED. Formal, widely Applicable, Minimal, Extensible, and Declarative.
ADTs should include a human readable description, definitions, abstract signatures, and formally verifiable axioms.
Bonus tip: If you’re not sure whether or not you should encapsulate a function, ask yourself if you would include it in an ADT for your component. Remember, ADTs should be minimal, so if it’s non-essential, lacks cohesion with the other operations, or its specification is likely to change, encapsulate it.
Glossary
Axioms are mathematically sound statements which must hold true.
Mathematically sound means that each term is well defined mathematically so that it’s possible to write unambiguous and provable statements of fact based on them.
Next Steps
EricElliottJS.com features many hours of video lessons and interactive exercises on topics like this. If you like this content, please consider joining.
Async iterators have been around in Node since version 10.0.0, and they seem to be gaining more and more traction in the community lately. In this article, we’ll discuss what Async iterators do and we'll also tackle the question of what they could be used for.
What are Async Iterators
So what are async iterators? They are practically the async versions of the previously available iterators. Async iterators can be used when we don’t know the values, or the end state, of what we iterate over ahead of time. Instead, we get promises that eventually resolve to the usual { value: any, done: boolean } object. We also get the for-await-of loop to help us loop over async iterators, just as the for-of loop does for synchronous iterators.
const asyncIterable = [1, 2, 3];
asyncIterable[Symbol.asyncIterator] = async function*() {
  // an async generator wraps each yielded value in the usual
  // { value: any, done: boolean } object automatically
  for (let i = 0; i < asyncIterable.length; i++) {
    yield asyncIterable[i];
  }
};
(async function() {
for await (const part of asyncIterable) {
console.log(part);
}
})();
The for-await-of loop will wait for every promise it receives to resolve before moving on to the next one, as opposed to a regular for-of loop.
Outside of streams, there are not a lot of constructs that support async iteration currently, but the symbol can be added to any iterable manually, as seen here.
Streams as async iterators
Async iterators are very useful when dealing with streams. Readable streams, including the readable sides of duplex and transform streams, expose the asyncIterator symbol out of the box.
const fs = require('fs');

async function printFileToConsole(path) {
try {
const readStream = fs.createReadStream(path, { encoding: 'utf-8' });
for await (const chunk of readStream) {
console.log(chunk);
}
console.log('EOF');
} catch(error) {
console.log(error);
}
}
If you write your code this way, you don't have to listen to the 'data' and 'end' events as you get every chunk by iterating, and the for-await-of loop ends with the stream itself.
Consuming paginated APIs
You can also fetch data from sources that use pagination quite easily using async iteration. To do this, we will also need a way to reconstruct the body of the response from the stream the Node https request method is giving us. We can use an async iterator here as well, as https requests and responses are streams in Node:
const https = require('https');
function homebrewFetch(url) {
return new Promise(async (resolve, reject) => {
const req = https.get(url, async function(res) {
if (res.statusCode >= 400) {
return reject(new Error(`HTTP Status: ${res.statusCode}`));
}
try {
let body = '';
/*
Instead of res.on to listen for data on the stream,
we can use for-await-of, and append the data chunk
to the rest of the response body
*/
for await (const chunk of res) {
body += chunk;
}
// Handle the case where the response doesn't have a body
if (!body) return resolve({});
// We need to parse the body to get the json, as it is a string
const result = JSON.parse(body);
resolve(result);
} catch(error) {
reject(error)
}
});
});
}
We are going to make our requests to the Cat API to fetch some cat pictures in batches of 10. We will also include a 7-second delay between the requests and a maximum page number of 5 to avoid overloading the cat API as that would be CATtastrophic.
function fetchCatPics({ limit, page, done }) {
return homebrewFetch(`https://api.thecatapi.com/v1/images/search?limit=${limit}&page=${page}&order=DESC`)
.then(body => ({ value: body, done }));
}
function catPics({ limit }) {
return {
[Symbol.asyncIterator]: async function*() {
let currentPage = 0;
// Stop after 5 pages
while(currentPage < 5) {
try {
const cats = await fetchCatPics({ page: currentPage, limit, done: false });
console.log(`Fetched ${limit} cats`);
yield cats;
currentPage ++;
} catch(error) {
console.log('There has been an error fetching all the cats!');
console.log(error);
}
}
}
};
}
(async function() {
try {
for await (let catPicPage of catPics({ limit: 10 })) {
console.log(catPicPage);
// Wait for 7 seconds between requests
await new Promise(resolve => setTimeout(resolve, 7000));
}
} catch(error) {
console.log(error);
}
})()
This way, we automatically get back a pageful of cats every 7 seconds to enjoy.
A more common approach to navigation between pages might be to implement a next and a previous method and expose these as controls.
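As a rough sketch of what that could look like (this is an invented illustration, not code from a library; `fetchPage` stands in for something like the fetchCatPics call above):

```javascript
// A hand-rolled paginator exposing explicit next/previous controls.
// fetchPage is any function that takes a page number and returns a
// promise for that page's data.
function createPaginator(fetchPage, maxPage = 5) {
  let page = 0;
  return {
    next: () => {
      if (page >= maxPage) return Promise.resolve({ done: true });
      return fetchPage(page++).then(value => ({ value, done: false }));
    },
    previous: () => {
      if (page <= 0) return Promise.resolve({ done: true });
      return fetchPage(--page).then(value => ({ value, done: false }));
    }
  };
}
```

Hooked up to UI buttons, next and previous give the user direct control over which page is fetched, while the same object could also implement Symbol.asyncIterator for for-await-of consumers.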
As you can see, async iterators can be quite useful when you have pages of data to fetch or something like infinite scrolling on the UI of your application.
These features have been available in browsers for some time as well, in Chrome since version 63, in Firefox since version 57 and in Safari since version 11.1. They are, however, currently unavailable in IE and Edge.
Did you get any new ideas on what you could use async iterators for? Do you already use them in your application?
Software development is an incredibly rewarding skill that can be extremely valuable. It’s remote-work friendly, and no matter where you live in the world, if you get good enough, you can qualify for great paying work ranging from $100k/year — $200k+/year (USD). Some of the highest-paid JavaScript developers make close to $500k/year. But to qualify for those great salaries, you have to get undeniably good at what you do.
Even if you’re already a professional software developer, you need to learn how to learn to code. Choosing a career in software development is choosing a path of lifelong learning.
In my role as a mentor, I’ve had a peek into the learning process of hundreds of developers. What shocked me most is how much faster some learn than others. Some with little or no coding background learn new concepts more than 10 times faster than others who may have 10+ years’ experience in the craft. The secret is that you can learn that fast, too.
There are a handful of learning secrets that can put you on a rocket to mastery of the craft.
1. Code
The best way to learn to code is to code. Jump into a development environment, and write some code. If you’re reading a book or blog post and you encounter a code example, type it out in a code editor and try to make it work. Once you get it working, play with it. Change things up. Try to think of other ways to apply it, or other things you can do with the same technique. Play with the code.
Book smarts will only get you so far. The best learning will come from doing.
2. Drive
The best way to get great at something is to do it. A lot. You need to be motivated and determined to learn. One way to get motivated is to give yourself the time and patience to gain some mastery. You don’t need to be an expert right away. It’s like learning a musical instrument. You can’t sit down at a piano and immediately be the next Debussy, but you can master the C major scale in your first sitting.
Likewise, you’re not going to sit down and immediately crank out the next Instagram, TikTok or Fortnite.
As you begin to master each small lesson, you’ll realize you can do this. You can get good at this. You can start to see your goal begin to materialize, and you’ll be more motivated to drive toward that goal.
Keep at it.
3. Focus
I’ve seen a lot of developers try to master everything all at once and get nowhere, fast. Their progress slows to an excruciatingly glacial crawl rather than a gold medal sprint.
If you want to learn something quickly, you can’t have your attention scattered everywhere except where you need it. Pick one language (start with JavaScript), one framework (start with React), one book, one course, one topic, etc. Whatever you pick, focus on that one thing until you have a sufficient mastery of it before you move on to something else.
I tell people all the time, concentrate on one language full time for at least a year before you branch out and learn another language. Decades ago, it used to be that a typical software developer would actually need to learn many languages in the course of their career to stay competitive in the field.
While it’s still true that learning more than one language can teach you different ways of seeing things, and even deepen your understanding of your primary language, these days a single language (JavaScript) can get you through the majority of your career.
Tip from a hiring manager: The skills you specialize in are your most valuable skills. If you commit to being a lifelong generalist bouncing from language to language, you’ll put an artificial ceiling on your mastery and earning potential.
4. Read
Many of the most useful insights available to software developers come from books. There are lots of good YouTube videos and courses online, but books are the standard bearers of software development culture and knowledge. In particular, I’ve found the following books extremely valuable:
5. Review
If you want to move a new concept from a familiar-sounding idea into long-term memory, review is your friend. The mistake most learners make is that they quickly read a book or a blog post, and then promptly forget what they read the next day. If you read something interesting that you want to remember, review it the next day. Test yourself. Then test yourself again the day after. And the day after that. Do that 4 days in a row, and your chances of committing the learning to long-term memory increase dramatically.
6. Mix Mediums
Some people learn best by reading, others by watching videos, but if you mix it up — watch a video, then do some reading, then practice with some interactive code sessions, you’ll repeat the concepts from multiple angles, and multiple examples. You’ll naturally drill some review, and get some practice in while you’re at it.
7. Build Projects
Learning the concept doesn’t mean you’ll know how to use it in a real app. Once you’ve been coding with exercises for a few weeks, it’ll be time to build something of your own. Need an idea? Instead of the ubiquitous todo app, try implementing The Rejection App.
8. Value Principles Over Frameworks and Languages
Frameworks and APIs change fast. Software design principles are evergreen. Learn principles that translate across language barriers.
Do One Thing (DOT) — Simplified from Doug McIlroy’s “Do One Thing and Do It Well (DOTADIW)” — a function should have one job. It should not fetch data AND process data AND draw to the screen. It should only fetch data. Or only process data. Or only draw to the screen. (Time to split your React components into smaller parts!)
“Program to an interface, not an implementation.” — Gang of Four, “Design Patterns”
“Favor object composition over class inheritance.” — Gang of Four, “Design Patterns”
Avoid shared mutable state.
“Premature optimization is the root of all evil.” ~ Donald Knuth
“You Aren’t Gonna Need It (YAGNI)” — Don’t write code for something that isn’t actually required, yet.
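For instance, here is a hypothetical sketch of “Do One Thing” in JavaScript (all names are invented for illustration):

```javascript
// Instead of one function that fetches AND filters AND formats,
// each step does one thing, and the steps are composed:
const fetchUsers = api => api.get('/users');                    // only fetches
const activeOnly = users => users.filter(user => user.active);  // only filters
const toListItems = users =>
  users.map(user => `<li>${ user.name }</li>`).join('');        // only formats

// compose the single-purpose pieces
const renderActiveUsers = async api =>
  toListItems(activeOnly(await fetchUsers(api)));
```

Each piece can now be tested, reused, and replaced on its own, which is the point of the principle.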
9. Share, Document, and Mentor
“Dr. Hoenikker used to say that any scientist who couldn’t explain to an eight-year-old what he was doing was a charlatan.” ~ Kurt Vonnegut — Cat’s Cradle
Learning how to code is just part of the equation. When you’re collaborating with other developers, your code will be reviewed by other people, and they will sometimes challenge your choices. As you try to explain yourself, you may find that you didn’t understand well enough to defend your position. Practice explaining, documenting, and teaching the concepts to your coworkers and other collaborators on your projects.
10. Practice, practice, practice!
Anybody who’s ever learned an acquired skill can attest, practice is key. But to get better you can’t just practice the concepts you already know. You need to challenge yourself and extend beyond the realm of what is familiar. If you constantly practice at the edge of your current abilities, you will excel.
The book, “Peak: The New Science of Expertise” delves into the study of deliberate practice and offers a wealth of insights that you can apply in your daily life to get better at practice. I strongly recommend reading it so that you can make your practice time and side-projects more productive.
How to Learn to Code was originally published in JavaScript Scene on Medium, where people are continuing the conversation by highlighting and responding to this story.
It’s Test-Driven Development with a twist! Developing new functionality with approval tests requires some slightly different steps, but if you’re a visual thinker like me you might just prefer it. In this blog post I’ll explain how it works.
You may be familiar with the Gilded Rose Kata. It’s the most popular exercise I have on my GitHub page. About a year ago I posted some videos demonstrating a way to solve it. I used several techniques, including ‘Approval’ testing, which is also known as ‘Golden Master’ testing. It’s an approach that’s often used to get legacy code under control. What’s perhaps less known is that you can use the same tools for new development. I’ve put together a new exercise – the ‘Lift’ kata – to help people understand how this works.
If you’ve never done the Lift Kata now might be a good time to try it out. I originally worked from this description of it, and I now have my own description and GitHub repo for those who want to try it out “approval testing style”. The first step towards solving it is to spend some time understanding the problem. I’m going to assume that most of you have been in a lift at some point, so take a few minutes to note down your understanding of how they work, and the rules that govern them. Perhaps even formulate some test cases.
I did this by sketching out some scenarios. I say ‘sketch’ and not ‘formulate’ quite deliberately! The way my mind works is quite visual, so for me it made sense to represent each floor vertically on the page, and write the name of the lift next to the floor it was on. This shows a lift system with four floors named 0, 1, 2, 3, and one lift named ‘A’, on floor 0:
This is just a snapshot of a moment in time. I then started to think about how a lift responds to people pressing the floor buttons inside. I figured that this is an important aspect to test and proceeded to sketch it out. It occurred to me that I could write a list of requested floor numbers next to the lift name, but then I noticed it was even clearer if I put a mark next to each requested. For example, if passengers request floors 2 and 3 I can sketch it like this:
The next move for this lift would be to go to floor 2 since it’s the closest requested floor. That example could be formulated as a test case sketch like this:
I can use this sketch as the first test case for TDD. I’ll need to write code for a lift with floors and requests. I’ll also need to write a ‘Printer’ that can turn a lift object into some output that looks like my sketch. I write some code for this and use the printer output in the ‘verify’ step of the test. After some work the output looks like this:
This ASCII art looks much the same as my sketch. One difference is that I wrote the floor numbers at both ends of each line. This is a trick to stop my editor from deleting what it thinks is irrelevant trailing whitespace at the ends of lines! I think it looks enough like my sketch to approve the output and store it as a ‘golden master’ for this scenario. Actually, I’ve already approved it several times as it started to look more and more like my sketch. And every time I did that, I could refactor a little before adding more functionality and updating the approved file again.
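To give a feel for what such a Printer can look like, here is a hypothetical sketch in JavaScript (the post’s actual printer and output format are not shown, so this format is invented): floors are listed top-down, a * marks a requested floor, and the floor number appears at both ends of each line to protect trailing whitespace:

```javascript
// Render a lift system as ASCII art for approval testing.
const printLift = ({ floors, lift }) =>
  [...floors].reverse().map(floor => {
    const requested = lift.requests.includes(floor) ? '*' : ' ';
    const here = lift.floor === floor ? lift.name : ' ';
    return `${floor} ${requested}${here} ${floor}`;
  }).join('\n');

console.log(printLift({
  floors: [0, 1, 2, 3],
  lift: { name: 'A', floor: 0, requests: [2, 3] }
}));
// 3 *  3
// 2 *  2
// 1    1
// 0  A 0
```

The test then only has to call the printer and compare the result against the approved golden master.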
I’m looking at the requirements again and realize that I haven’t modelled the lift’s doors. You can’t fulfill a request until you’ve opened the doors, and that only happens after you’ve moved to the right floor. I drew a new sketch including them, shown below. I’ve written [A] for a lift called ‘A’ with closed doors, and ]A[ for when it has open doors. I also show an intermediate step when the lift is on the correct floor, but since the doors are closed the request is still active:
To get this to pass I’ll need to update all of my lift class, my printer, and my test case. After a little coding, and a few iterations of improving both the code and the printer, the test produces output that looks like this and I approve it:
Now that the test is passing, I’m fairly happy that my lift can answer requests. The next feature I was thinking about was being able to call the lift from another floor. For this I think I’ll need a new test case. Let’s say I’m standing on the third floor and the lift is on floor 1, and I press the button to go down. I can include that in my sketch by putting a ‘v’ next to the floor I’m on. The whole scenario might play out like this:
As before, I spend time improving both the lift code and the printer. I approve intermediate results several times and do several refactorings. At some point the output from my program looks like my sketch and I approve it:
Great stuff! My lift can now fulfill requests from passengers and answer calls from another floor. Time for a celebratory cup of tea!
I’ve shown you the first couple of test cases, but there are of course plenty more features I could implement. A system with more than one lift for a start. Plus, the lift should alert the person waiting when it arrives by making a ‘ding’ when it opens the doors. I feel my lifts would be vastly improved if they said ding! I’ll have to come up with a new sketch that includes this feature. For the moment, let’s pause and reflect on the development process I’ve used so far.
Comparing Approval Testing with ordinary TDD
If I’d been doing ordinary Test-Driven Development with unit tests I might have created a dozen tests in the same time period for the same functionality. With Approval Testing I’ve still been working incrementally and iteratively and refactoring just as frequently. I only have two test cases though. The size of the unit being tested is a little larger than with ordinary TDD, but the feedback cycle is similarly short.
Having a slightly larger unit for testing can be an advantage or a disadvantage, depending on how you view it. When the chunk of code being tested is larger, and the test uses a fairly narrow interface to access that code, it constrains the design less than it would if you instead had many finer grained tests for lower level interfaces. That means the tests don’t influence the design as strongly, and don’t need to be changed as often when you refactor.
Another difference is that I’ve invested some effort in building code that can print a lift system as an ASCII artwork, which is reused in all my tests. In classic TDD I’d have had to write assertion code that would have been different in every test.
Try it for yourself
What I’ve done isn’t exactly the same as ordinary TDD, but I think it’s a useful approach with many of the same benefits. I’ve put this exercise up on GitHub, so you can try it out for yourself. I’ve included the code for my printer so you don’t have to spend a lot of time setting that up, and can get on with developing your lift functionality. I’ve also recorded a video together with Adrian Bolboaca where I explain how the exercise works. So far I’ve translated the starting code into Java, C# and Python, and some friends have done a C++ version. (Do send me a pull request if you translate it to your favourite language.) And that’s it! You’ve seen how easy it is, so why don’t you have a try at Approval testing-style TDD for yourself?
Although I use React Hooks a lot, I don't really like them. They are solving tough problems, but with an alien API that is hard to manage at scale.
It's even harder to wire them together with a library that is based on mutable data. The two concepts don't play well together, and forcing them would cause a hot mess. Instead, the React Easy State team at RisingStack is working on alternative patterns that combine the core values of React Hooks and mutable data.
We think these core values are:
encapsulation of pure logic,
reusability,
and composability.
At the same time, we are trying to get rid of:
the strange API,
reliance on closures to store data,
and overused patterns.
This article guides you through these points and how React Easy State tackles them compared to vanilla Hooks.
TLDR: "React Easy State is a transparent reactivity based state manager for React. In practical terms: it automagically decides when to render which components without explicit orders from you."
A basic example of Hooks & React Easy State
Let's see how to set the document title with Hooks and with React Easy State.
autoEffect replaces the useEffect hook while store replaces useState, useCallback, useMemo and others. Under the hood, they are built on top of React hooks, but they utilize a significantly different API and mindset.
Reusability
What if you have to set the document’s title again for other pages? Having to repeat the same code every time would be disappointing. Luckily, Hooks were designed to capture reusable logic.
React Easy State tackles the same problem with store factories: a store factory is a function that returns a store. There are no other rules. You can use store and autoEffect - among other things - inside it.
titleStore.js:
import { store, autoEffect } from "@risingstack/react-easy-state";
export default function titleStore(initialTitle) {
const title = store({
value: initialTitle,
onChange: ev => (title.value = ev.target.value)
});
autoEffect(() => (document.title = title.value));
return title;
}
App.js:
import React from "react";
import { view } from "@risingstack/react-easy-state";
import titleStore from "./titleStore";
export default view(() => {
const title = titleStore("App title");
return <input value={title.value} onChange={title.onChange} />;
});
Things can get messy as complexity grows, especially when async code gets involved. Let's write some reusable data fetching logic! Maybe we will need it later (;
Notice how we have to use a setState with an updater function in the finally block of useFetch. Do you know why it needs special handling?
If not, try to rewrite it to setState({ ...state, loading: false }) in the CodeSandbox demo and see what happens. Then read this article to gain a deeper understanding of hooks and stale closures. Seriously, do these before you go on!
Otherwise, try to think of a good reason why the other setStates should be rewritten to use updater functions. (Keep reading for the answer.)
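The stale-closure trap can be demonstrated outside React with a toy setState. This is a simulation for illustration only, not React's real implementation:

```javascript
// A toy simulation (not React) of why `setState({ ...state, loading: false })`
// can use a stale snapshot, while an updater function cannot.
function createState(initial) {
  let state = initial;
  const setState = update =>
    (state = typeof update === "function" ? update(state) : update);
  return { get: () => state, setState };
}

const fetchState = createState({ data: null, loading: true });

// This "finally" callback pockets the state object as it was right here,
// before any data arrived:
const stale = fetchState.get();
const finishWithSpread = () => fetchState.setState({ ...stale, loading: false });

fetchState.setState(s => ({ ...s, data: "pikachu" })); // the fetch resolves
finishWithSpread(); // spreads the pre-fetch snapshot back in
console.log(fetchState.get().data); // null: the fetched data was wiped out

// An updater function asks for the latest state instead of pocketing one:
const fresh = createState({ data: "pikachu", loading: true });
fresh.setState(s => ({ ...s, loading: false }));
console.log(fresh.get()); // { data: 'pikachu', loading: false }
```

That is the answer to the question above: every setState that merges old state into new state must use an updater function, or it risks merging a snapshot from a long-gone render.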
React Easy State version
You have probably heard (about a thousand times) over your career that mutable data is bad. Well... closures are worse. They seem simple at first glance, but they hide your data in “pockets” specific to the moment the function was created, which introduces a new layer of complexity. Instead of using the latest data during a function’s execution, you have to remember what data it “pocketed” when it was created.
Hooks rely heavily on closures to store data, which leads to issues like the example above. Obviously, this is not a bug in the Hooks API, but it is a serious cognitive overhead that gets mind-bending as your complexity grows.
React Easy State stores its data in mutable objects instead. That has its own quirks, but it is far easier to handle in practice: you always get what you ask for, not some stale data from a long-gone render.
While we were playing with data fetching, the document-title-setting application turned into a massive hit with tons of feature requests. Eventually, you end up fetching related pokemon from the free PokéAPI.
Luckily you already have a data fetching hook, what a coincidence...
You don't want to refactor your existing code snippets, and it would be nicer to compose them together into more complex units. The hooks API was designed to handle this.
The fetch callback uses state and has it inside its dependency array. This means that whenever state changes fetch gets recreated, and whenever fetch gets recreated our useEffect in usePokemon kicks in ...
useEffect(() => {
fetch(name);
}, [fetch, name]);
That's bad news! We only want to refetch the pokemon when name changes. It's time to remove fetch from the dependency array.
And it breaks again... This time, it is not looping, but it always fetches the first (stale) pokemon. We keep using an old fetch that is stuck with a stale closure as its data source.
The correct solution is to modify our useFetch hook to use the setState function inside the fetch callback and remove the state dependency from its dependency array.
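The looping and the staleness both come from how dependency arrays are checked. A simplified model of that check (not React's actual source) shows why a recreated fetch function re-triggers the effect:

```javascript
// A simplified model of how an effect's dependency array is checked:
// each entry is compared to the previous render's entry with Object.is.
function depsChanged(prevDeps, nextDeps) {
  if (prevDeps === null) return true; // first render: always run the effect
  return nextDeps.some((dep, i) => !Object.is(dep, prevDeps[i]));
}

// A function recreated on every render is a new identity each time...
const fetchA = () => {};
const fetchB = () => {}; // "the same" code, but a different object
console.log(depsChanged([fetchA, "ditto"], [fetchB, "ditto"])); // true: effect re-runs
// ...while a stable function identity only re-runs the effect when name changes:
console.log(depsChanged([fetchA, "ditto"], [fetchA, "pikachu"])); // true
console.log(depsChanged([fetchA, "ditto"], [fetchA, "ditto"])); // false
```

Since `fetch` was recreated whenever `state` changed, the `[fetch, name]` array differed on every render, and the effect looped.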
This mess is caused by the combination of closures and hook dependency arrays. Let's avoid both of them.
React Easy State version
React Easy State takes a different approach to composability. Stores are simple objects which can be combined by nesting them in other objects.
pokeStore.js:
import { store, autoEffect } from "@risingstack/react-easy-state";
import titleStore from "./titleStore";
import fetchStore from "./fetchStore";
const POKE_API = "https://pokeapi.co/api/v2/pokemon/";
export default function pokeStore(initialName) {
const pokemon = store({
name: titleStore(initialName),
data: fetchStore(POKE_API)
});
autoEffect(() => pokemon.data.fetch(pokemon.name.value));
return pokemon;
}
The data is stored in - always fresh - mutable objects and hook-like dependency arrays are not required because of the underlying transparent reactivity. Our original fetchStore works without any modification.
Extra Features that Hooks don't have
React Easy State is a state management library, not a Hooks alternative. It provides some features that Hooks cannot.
Global state
You can turn any local state into a global one by moving it outside of component scope. Global state can be shared between components regardless of their relative position to each other.
pokemon.js:
import pokeStore from "./pokeStore";
// this global state can be used by any component
export default pokeStore("ditto");
Input.js:
import React from "react";
import { view } from "@risingstack/react-easy-state";
import pokemon from "./pokemon";
export default view(() => (
<input value={pokemon.name.value} onChange={pokemon.name.onChange} />
));
As you can see, old-school prop propagation and dependency injection are replaced by simply importing and using the store.
How does this affect testability, though?
Testing
Hooks encapsulate pure logic, but they can not be tested as such. You must wrap them into components and simulate user interactions to access their logic. Ideally, this is fine since you want to test everything - logic and components alike. Practically, time constraints of real-life projects won’t allow that. I usually test my logic and leave my components alone.
React Easy State store factories return simple objects, which can be tested as such.
While hooks are new primitives for function components only, store factories work regardless of where they are consumed. This is how you can use our pokeStore in a class component.
Using store factories in classes still has a few rough edges regarding autoEffect cleanup; we will address these in the coming releases.
Reality check
This article defied a lot of trending patterns, like:
hooks,
avoiding mutable data,
traditional dependency injection,
and full front-end testing.
While I think all of the above patterns need a revisit, the provided alternatives are not guaranteed to be 'better'. React Easy State has its own rough edges, and we are working hard to soften them in the coming releases.
For starters, stay tuned for our 'Idiomatic React Easy State' docs in the near future. Consider this article a fun and thought-provoking experiment in the meantime.
The important thing is to not stop questioning. Curiosity has its own reason for existing.
JavaScript Quick Tip — Avoid Serial Request Waterfalls
One gotcha that comes up frequently and has a serious impact on application performance is the tendency to accidentally fetch data in serial that could have been fetched in parallel. Don’t just drop an await everywhere you use promises. Instead, think about the fetching dependencies. If you’re fetching more than one thing, make sure you fetch in parallel whenever you can. This can make a huge difference in your application’s performance.
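As a sketch, with hypothetical getUser and getPosts fetchers standing in for real network calls:

```javascript
// Hypothetical fetchers used only for illustration.
const getUser = async id => ({ id, name: "Ada" });
const getPosts = async id => [{ id: 1, author: id }];

// Serial: the second request waits for the first to finish,
// even though it doesn't depend on the first result.
async function loadProfileSerial(userId) {
  const user = await getUser(userId);   // ~1 round trip
  const posts = await getPosts(userId); // ~1 more round trip
  return { user, posts };
}

// Parallel: neither result depends on the other, so start both at
// once and await them together. Total latency ~= the slower request.
async function loadProfileParallel(userId) {
  const [user, posts] = await Promise.all([getUser(userId), getPosts(userId)]);
  return { user, posts };
}
```

The rule of thumb: `await` creates a dependency; only use it where a real data dependency exists.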
Here’s some example code for you to play with.
Eric Elliott is a tech product and platform advisor, author of “Composing Software”, cofounder of EricElliottJS.com and DevAnywhere.io, and dev team mentor. He has contributed to software experiences for Adobe Systems, Zumba Fitness, The Wall Street Journal, ESPN, BBC, and top recording artists including Usher, Frank Ocean, Metallica, and many more.
He enjoys a remote lifestyle with the most beautiful woman in the world.
Passwords are Obsolete — How to Secure Your App and Protect Your Users
I’ve said this part before, so if you read the previous article, skip to “How to Implement Passwordless Authentication”, below. I’m posting the introduction again for those of us who are too lazy to click a link.
Managing user authentication and authorization is a very serious responsibility, and getting it wrong can cost a lot more than unauthorized access to your app. It can also compromise user privacy or lead to financial damage or identity theft for your users. Unless you are a huge company with a huge security team, you don’t want that kind of responsibility or liability for your app.
Most apps today are built with username and password authentication, and once a user is signed in, that user session can do anything it wants to do, without revalidating the user’s intention.
That security model is broken for a number of reasons:
Passwords are obsolete. If you have any doubt about that, head over to HaveIBeenPwned and plunk in your email address. Sensitive data has been stolen in many high profile data breaches impacting companies like Dropbox, Adobe, Disqus, Kickstarter, LinkedIn, Tumblr, and many, many more. If there’s a database with passwords in it, it’s only a matter of time before it gets stolen.
If an attacker can discover a password that hashes to the same hash as the one stored in your database, they’ll take that combination and try it on things like bank account websites. In many cases, even a salted, hashed password database will leak another valid username/password pair every minute or so. That’s about half a million leaked passwords per year — and that rate is doubling every few years. I wrote about this topic in 2013. The bad guys are now hashing passwords more than 10 times faster than they were then.
User Sessions Get Hijacked. User sessions are commonly hijacked after authentication, allowing attackers to exploit that user’s application resources. To prevent that, you’d need to re-authenticate the user with every request, and in the land of usernames and passwords, that would create an awkward user experience.
Upgrading Authentication
One of the coolest features of decentralized applications is the decentralized security model. Using the Ethereum blockchain ecosystem, each user gets a public and private key pair. You can sign every request with the user’s private key and verify requests with the user’s public key. Each request is uniquely authenticated, which reduces the chance of hijacking to nearly zero.
A hijacker would need the ability to sign on behalf of the user, but they can’t do that without access to the user’s private key, which is protected by hardware-level security. Using Hardware Security Modules (HSMs), we can protect private keys from exposure to the internet. Instead of sending the private key over the network, we send the messages which need signing to the private key in the HSM. The user authorizes the signature, and the signed request gets authenticated and processed. If the signature is invalid, the request gets rejected.
Additionally, those key pairs can encrypt and decrypt user data so that only the user who owns the data can read it. If an app developer chooses to let users encrypt their data, even the application can’t decrypt the data without the user’s permission. With this security model, we can put users in control of their private information.
Passwordless Authentication with Magic Links
An emerging way to bypass the need for passwords is to use magic links. A magic link is a temporary URL that expires after use, or after a specific interval of time. Magic links can be sent to your email address, an app, or a security device. Clicking the link authorizes you to sign in.
Password-only security is obsolete and dangerously insecure. Magic links eliminate the headaches of lost or stolen passwords and protect app users.
But you don’t want to try to roll your own public/private key-based magic links, or you’ll move from the frying pan into the fire. If you think keeping passwords safe is hard, don’t even think of trying to manage private keys, which, if stolen, could potentially grant access to an Ethereum wallet loaded with valuable money, collectibles, memberships, etc.
Lots of apps and wallets push that responsibility on end users. That’s like putting the key to a bank vault in somebody’s mobile phone. What if the phone gets lost, stolen, or upgraded?
In my opinion, the best way forward is to delegate key management to people who specialize in key management. One such service launched today. It’s called Magic. It’s made by Magic Labs, a cybersecurity company that has assembled experts from companies like Docker, Apple, Google, Amazon, Yelp, Uber, Accenture, and TD Bank.
Their security model stores your user’s private keys in HSMs. An HSM is a bit like a hardware locker for private keys. The keys are protected by hardware and never leave the hardware. Keys are never exposed to the internet. Instead, messages that need to be signed by those keys are delivered to and signed on the dedicated hardware.
Imagine a bank safety deposit box. What’s inside the box is a key that can be used to authorize signatures and transactions. When you rent a safety deposit box from a bank, the contents of the box belong to you. The bank just keeps it safe for you. Using Magic is a bit like giving each of your users a dedicated safety deposit box for their key. Magic can’t access the keys, and neither can you. Keys are always in the user’s control but hosted in the cloud, so users don’t have to worry about losing them.
In other words, Magic is a non-custodial, hardware-secured key management system. It features best-in-class security and SOC 2 compliance. But the best part is that your users don’t need to know what any of that means. All they need to know how to do is enter their email and click a button.
Adding Magic to Your App
Magic has a great getting started guide and documentation to help you get set up fast and understand the basics, but we’re going to take a deeper dive and dissect the actual useMagicLink React hook that we developed for our integration with EricElliottJS.com. If you read the previous article on Fortmatic, this is going to look very similar to the useFortmatic hook we developed before, but it has a few slight changes.
Before you dive into the source code, you’ll need to understand some foundational concepts:
This file requires a few helpers. The first is a usePromise hook that holds onto a persistent reference to the magicReady promise. I promise, you want your magic to be ready before you try to use it, or your spell will backfire.
And the localStorage drop-in replacement for useState:
Last, some miscellaneous tools:
With the hook finished, our next step was to integrate the hook with our existing app. We use Redux and try to isolate logic, I/O, and state process as much as we can from our presentation components. This Higher Order Component (HOC) lets us compose our magic link logic into every page in our app in one place.
Here’s the HOC we use to compose cross-cutting concerns into every page that needs them:
Since these HOCs are so easy to create and maintain, you can create custom HOCs that omit features that aren’t needed for a particular page or add features that aren’t needed for every page. Now we can sign in or out anywhere on our site!
Conclusion
Here are my current recommendations for user authentication security:
Password-only security is obsolete and dangerously insecure. Don’t use it.
Public key cryptography gives us public/private key pairs we can use to enhance user safety significantly.
Key management is hard, both for app developers and end users.
App developers should delegate key management to security specialists.
Hardware Security Modules (HSMs) can securely store user keys in the cloud, without forcing users to know what any of this means.
Magic links eliminate the headaches of lost or stolen passwords and protect app users.
Magic provides best-in-class, non-custodial, delegated key management for your users. They’re paving the way for a more secure future for apps and users. They have set a new bar for user authentication security and user experience, and they’re currently the only authentication solution I recommend for developers of new applications.
Magic unlocks Ethereum. With Magic’s Ethereum-based public/private key pairs, we can transact with Ethereum and EVM-compatible protocols and tokens, unlocking lots of capabilities that were never possible before blockchain technology was invented.
Note: You don’t have to build an Ethereum dapp to benefit from Magic’s passwordless authentication and improved user security. You can improve any app with Magic.
Take a test drive of the authentication flow on EricElliottJS.com. Our existing users have GitHub authenticated accounts that need to be linked the first time you sign in, but once you’ve linked your GitHub account once, you won’t need to do it again. Sign out and sign back in to see the Magic-only flow.
The GitHub flow requires shields to be down in Brave because of how the authentication flow is delegated. The new Magic authentication flow works like a charm with shields up. Magic’s simplified flow has lots of hidden benefits.
While you’re checking out EricElliottJS.com, browse the premium content, and explore some of our online JavaScript lessons.
Make some magic.
Hi! About a year ago I created React Easy State - a moderately popular React state manager - which currently has around 1.8K stars, and a small but enthusiastic community forming around it. Unfortunately, I didn't have enough time to keep up with the blooming community in the last couple of months.
I’m happy to announce that this situation ends today!
React Easy State has moved under RisingStack and receives company support from now on. A new, enthusiastic support team, with no licensing changes, makes me really excited about the future!
Special shoutout to my colleagues Roland Szoke, Peter Czibik and Daniel Gergely, who have already contributed immensely to this project in the past weeks! <3
So what is React Easy State?
React Easy State is a transparent reactivity based state manager for React. In practical terms: it automagically decides when to render which components without explicit orders from you.
import React from 'react';
import { store, view } from 'react-easy-state';
const counter = store({
num: 0,
increment: () => counter.num++
});
// this component re-renders when counter.num changes
export default view(() => (
<button onClick={counter.increment}>{counter.num}</button>
));
Why should I use it?
Transparent reactivity is not a new idea; Vue and MobX are popular libraries that implement it. So how does Easy State differ from them?
The technical edge
Historically, transparent reactivity libraries could only work with basic get and set operations. Slightly more complex use cases - like arrays or delete operations - required special handling, which killed the 'transparent vibe'. Then came Proxies, a meta-programming addition to JavaScript.
Proxies can intercept language operations that couldn’t be intercepted before. They gave a huge boost to transparent reactivity libraries, and both MobX and Vue have since embraced them.
Easy State’s core, by contrast, was born out of Proxies 4 years ago, back when they were an experimental API available only in Chrome. It carries no bloat from the pre-Proxy era, and it had a long time to mature during those 4 years. This advantage is noticeable both in the minimalistic API and in the stability of the library.
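To see why Proxies matter here, consider a toy version of transparent reactivity. This is a teaching sketch, not Easy State's actual implementation:

```javascript
// Track which reaction reads which property, and re-run that reaction
// whenever the property is written - no explicit subscriptions needed.
let activeReaction = null;
const subscribers = new WeakMap(); // target -> Map<key, Set<reaction>>

function observable(target) {
  subscribers.set(target, new Map());
  return new Proxy(target, {
    get(obj, key) {
      if (activeReaction) {
        // record: "the running reaction depends on this property"
        const keys = subscribers.get(obj);
        if (!keys.has(key)) keys.set(key, new Set());
        keys.get(key).add(activeReaction);
      }
      return obj[key];
    },
    set(obj, key, value) {
      obj[key] = value;
      const reactions = subscribers.get(obj).get(key);
      if (reactions) reactions.forEach(fn => fn()); // notify dependents
      return true;
    },
  });
}

function autoRun(fn) {
  activeReaction = fn;
  fn(); // the first run records which properties fn reads
  activeReaction = null;
}

const counter = observable({ num: 0 });
let rendered;
autoRun(() => (rendered = `count: ${counter.num}`));
counter.num++; // the write transparently re-runs the reaction
console.log(rendered); // "count: 1"
```

In Easy State the reaction is a component render, which is how `view` components re-render without explicit orders from you.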
The everyday API consists of two functions only. The rest is automagic and contextual clues to let you focus on business logic instead of reading docs.
Handling global state in React was always a bit clumsy. With Easy State you can create both global and local state with the same API by placing the state accordingly.
Global state
import React from 'react';
import { store, view } from 'react-easy-state';
// this state is defined globally and can be shared between components
const counter = store({
num: 0,
increment: () => counter.num++
});
export default view(() => (
<button onClick={counter.increment}>{counter.num}</button>
));
Local state
import React from 'react';
import { store, view } from 'react-easy-state';
export default view(() => {
// this state is defined inside the component and it is local to the component
const counter = store({
num: 0,
increment: () => counter.num++
});
return (<button onClick={counter.increment}>{counter.num}</button>);
});
So why move under RisingStack?
How does an already stable library benefit from RisingStack's support? The core is pretty much 'done', it didn't need any commits for the last 13 months. The React port - which is React Easy State - is a different story though. You probably know that React is in the middle of an exciting transition period with hooks and the upcoming async API. These changes have to be tied together with the core in an intuitive way which is not an easy task. This is where RisingStack is a huge help.
Together we can react quickly to React changes (pun intended).
Image- Smoke Art Cubes to Smoke — MattysFlicks — (CC BY 2.0)
On remote teams, conveying team norms is a different process from in the office. Office workers can usually stroll to another desk and ask somebody a question whenever one comes up. On remote teams, your team members may work at different times, or be busy with family errands in the middle of the workday (and that’s OK — people should work when they can be most productive).
So how should remote workers communicate the team’s practices and procedures when they can’t just shout, “Hey, how do we do code reviews around here?”
Checklists.
I’ve been using checklists for years. In software development we have many of them, like the SOLID principles of object-oriented design.
But it wasn’t until I read “The Checklist Manifesto” that I realized the true power of checklists, and started making them standard operating procedures on my software teams.
The book describes a study which is particularly relevant today, because as I type this, the world is suffering from the worst global disaster since World War II: the COVID-19 pandemic. The study was conducted by Stephen Luby with support from Procter & Gamble to test the effectiveness of antibacterial soap, and it delivered incredible results: the incidence of various diseases fell 35% to 52%.
But what’s really interesting about this study is that the kind of soap that was used didn’t make a big difference, and the people already had and used soap. The difference was that the study instructions included two checklists — When to wash hands:
Before preparing food or feeding it to others
After sneezing or coughing
After wiping an infant
After using a bathroom
Most people were already washing their hands after using a bathroom, but they weren’t doing it properly. The checklist also included instructions:
Use soap.
Wet both hands completely.
Rub the soap until it forms a thick lather covering both hands completely.
Wash hands for at least 20 seconds [not included in this study, but we know now it takes at least 20 seconds to break down viral bugs including Coronavirus so I’m putting it here for posterity].
Completely rinse the soap off.
It was not the particular soap used — any soap is effective. It was the checklists that prevented illness. The checklists changed behaviors and taught people how and when to properly wash their hands.
If you talk to members of my teams, you’ll discover that we create checklists for lots of things. The best checklists are:
[ ] Short enough to memorize
[ ] Only include the key points
If they get too long, conformance to the checklist drops, as people begin to see checklist points as optional suggestions.
Here are some real examples of the checklists we commonly use on our teams. A few of these (like FIRST and RAIL) are widely used and developed externally. Several others (including 5 Questions, RITE Way, Test Timing, and both CI/CD lists) were developed by me, but inspired by common industry best practices:
Code Review Checklist
Before merging a pull request, check that the following have been considered:
[ ] PR is small enough (otherwise, break it up)
[ ] Code is readable
[ ] Code is tested
[ ] The features are documented
[ ] Files are located and named correctly
[ ] Error states are properly handled
[ ] Bonus: Screenshots/screencast demo included
Code Test Checklist (RITE Way)
In quality software, developers must deliver tests which automatically prove that the code works. To test the RITE Way, each test should be:
[ ] Readable
[ ] Isolated (units and tests)/Integrated (for integration tests)
[ ] Thorough
[ ] Explicit
Test Timing Checklist
[ ] Unit tests run in under 10 seconds
[ ] Functional tests should run in under 10 minutes
[ ] CI/CD checks should run in under 10 minutes
5 Questions Every Unit Test Should Answer
[ ] What is the component under test?
[ ] What is its expected behavior (in human readable form)?
[ ] What is its actual output?
[ ] What is its expected output?
[ ] How do you reproduce a test failure? (Double check that the answers to the above answer this question).
Which is unsurprisingly similar to the bug report checklist, because all failing unit tests should be good bug reports.
Bug Report Checklist
Each bug report should include:
[ ] Description (including location)
[ ] Expected output
[ ] Actual output
[ ] Instructions to reproduce
[ ] Environment (browser/OS versions, extensions)
[ ] Bonus: Screenshot/screencast demonstrating the bug
Component Checklist (FIRST)
Components should follow the FIRST principles:
[ ] Focused
[ ] Independent
[ ] Reusable
[ ] Small
[ ] Testable
Software User Interface (UI) Performance Checklist (RAIL)
Software UIs should conform to the RAIL performance model:
[ ] Respond to user interaction in under 100ms
[ ] Animation frames should draw in under 10ms
[ ] Idle time processes should be batched in blocks of less than 50ms
[ ] Load in under 1 second
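The idle-batching point can be sketched as a deadline-bounded task loop. This is a simulation with an injectable clock; in a browser you would use requestIdleCallback's deadline instead:

```javascript
// Drain a task queue in batches, yielding once a batch has used its
// ~50ms budget so the main thread stays responsive.
function runBatch(tasks, budgetMs = 50, now = () => Date.now()) {
  const deadline = now() + budgetMs;
  let ran = 0;
  while (tasks.length > 0 && now() < deadline) {
    tasks.shift()(); // run one queued task
    ran++;
  }
  return ran; // caller schedules another batch if tasks remain
}
```

The caller re-schedules `runBatch` (e.g. via another idle callback) until the queue is empty, so no single batch blocks user input.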
Continuous Delivery Preparedness Checklist
[ ] A minimum of 80% of the code is covered by unit tests.
[ ] All critical user workflows are covered by functional tests.
[ ] All critical integrations are covered by integration tests.
[ ] A feature toggle system exists to toggle features on and off in the production environment. All unfinished features are toggled off by default.
CI/CD Checklist
[ ] Each commit to master must first pass through a Pull Request (PR) process. Merging must be blocked until checks pass.
[ ] Each pull request must be peer reviewed before merging into the master branch.
[ ] Each pull request must pass a full suite of automated tests configured to stop the integration process if any tests fail.
[ ] Each commit triggers its own sandboxed build for testing, demonstration and verification. The build link is added to the code review data.
[ ] After all checks pass, merging is unblocked.
[ ] The author of the pull request must OK the merge (or be personally responsible for merge).
[ ] Merging should trigger automatic production deployment of the newly integrated code. Hence, the master branch should always reflect the production product build.
Eric Elliott is the author of the books “Composing Software” and “Programming JavaScript Applications”. As co-founder of EricElliottJS.com and DevAnywhere.io, he teaches developers essential software development skills. He builds and advises development teams for crypto projects, and has contributed to software experiences for Adobe Systems, Zumba Fitness, The Wall Street Journal, ESPN, BBC, and top recording artists including Usher, Frank Ocean, Metallica, and many more.
He enjoys a remote lifestyle with the most beautiful woman in the world.
After going through this tutorial, you’ll understand the basics of Ansible - an open-source software provisioning, configuration management, and application-deployment tool.
First, we’ll discuss the Infrastructure as Code concept, and we’ll also take a thorough look at the currently available IaC tool landscape. Then, we’ll dive deep into what is Ansible, how it works, and what are the best practices for its installation and configuration.
You’ll also learn how to automate your infrastructure with Ansible in an easy way.
Okay, let's start with understanding the IaC Concept!
What is Infrastructure as Code?
Since the dawn of complex Linux server architectures, servers have been configured either through the command line or with bash scripts. The problem with bash scripts is that they are quite difficult to read, but more importantly, they are a purely imperative approach.
When relying on bash scripts, implementation details or small differences between machine states can break the configuration process. There’s also the question of what happens if someone SSHs into the server, configures something through the command line, and later someone else runs a script that expects the old state.
The script might run successfully, simply break, or things could completely go haywire. No one can tell.
To alleviate the pain of defining our server configurations in bash scripts, we needed a declarative way to apply idempotent changes to the servers’ state. Idempotent means that no matter how many times we run our script, it always results in the exact same expected state.
This is the idea behind the Infrastructure as Code (IaC) concept: handling the state of infrastructure through idempotent changes, defined with an easily readable, domain-specific language.
What are these declarative approaches?
First, Puppet was born, then came Chef. Both of them were responses to the widespread adoption of using clusters of virtual machines that need to be configured together.
Both Puppet and Chef follow the so-called “pull-based” method of configuration management. This means that you define the configuration, using their respective domain-specific languages, and store it on a server. When new machines are spun up, they need a preconfigured client that pulls the configuration definitions from the server and applies them to itself.
Using their domain-specific language was definitely clearer and more self-documenting than writing bash scripts. It is also convenient that they apply the desired configuration automatically after spinning up the machines.
However, one could argue that the need for a preconfigured client makes them a bit clumsy. Also, configuring these clients is still quite complex, and if the master node that stores the configurations is down, all we can do is fall back to the old command line / bash script method when we need to update our servers quickly.
To avoid a single point of failure, Ansible was created.
Ansible, like Puppet and Chef, sports a declarative, domain-specific language, but in contrast to them, Ansible follows a “push-based” method. That means that as long as you have Python installed, and you have an SSH server running on the hosts you wish to configure, you can run Ansible with no problem. We can safely say that expecting SSH connectivity from a server is definitely not inconceivable.
Long story short, Ansible gives you a way to push your declarative configuration to your machines.
Later came SaltStack. It also follows the push-based approach, but it comes with a lot of added features and, with them, a lot of added complexity, both usage- and maintenance-wise.
Thus, while Ansible is definitely not the most powerful of the four most common solutions, it is hands down the easiest to get started with, and it should be sufficient to cover 99% of conceivable use-cases.
If you’re just getting started in the world of IaC, Ansible should be your starting point, so let’s stick with it for now.
Other IaC tools you should know about
While the above-mentioned four (Puppet, Chef, Salt, Ansible) handle the configuration of individual machines in bulk, there are other IaC tools that can be used in conjunction with them. Let’s quickly list them for the sake of completeness, and so that you don’t get lost in the landscape.
Vagrant: It has been around for quite a while. Contrary to Puppet, Chef, Ansible, and Salt, Vagrant gives you a way to create blueprints of virtual machines. This also means that you can only create VMs using Vagrant; you cannot modify them afterwards. So it can be a useful companion to your favorite configuration manager, setting up the client or the SSH server needed to get the machines started.
Terraform: Vagrant comes in handy before you can use Ansible if you maintain your own fleet of VMs. If you’re in the cloud, Terraform can be used to declaratively provision VMs, set up networks, or basically anything you can handle with the UI, API, or CLI of your favorite cloud provider. Feature support may vary depending on the actual provider, and most providers ship their own IaC solutions as well, but if you prefer not to be locked into a platform, Terraform might be the best solution to go with.
Kubernetes: Container orchestration systems are considered Infrastructure as Code as well: especially with Kubernetes, you have control over the internal network, the containers, and a lot of aspects of the actual machines; basically, it’s more like an OS in its own right than anything else. However, it requires you to have a running cluster of VMs with Kubernetes installed and configured.
All in all, you can use either Vagrant or Terraform to lay the groundwork for your fleet of VMs, then use Ansible, Puppet, Chef or Salt to handle their configuration continuously. Finally, Kubernetes can give you a way to orchestrate your services on them.
Are you looking for expert help with infrastructure related issues or project? Check out our DevOps and Infrastructure related services, or reach out to us at info@risingstack.com.
We’ve previously written a lot about Kubernetes, so this time we’ll take one step back and take a look at our favorite remote configuration management tool:
What is Ansible?
Let’s take apart what we already know:
Ansible is a push-based IaC, providing a user-friendly domain-specific language so you can define your desired architecture in a declarative way.
Being push-based means that Ansible uses SSH for communicating between the machine that runs Ansible and the machines the configuration is being applied to.
The machines we wish to configure using Ansible are called managed nodes or hosts. In Ansible’s terminology, the list of hosts is called an inventory.
The machine that reads the definition files and runs Ansible to push the configuration to the hosts is called a control node.
How to Install Ansible
It is enough to install Ansible only on one machine, the control node.
Control node requirements are the following:
Python 2 (version 2.7) or Python 3 (versions 3.5 and higher) installed
Windows is not supported as a control node, but you can set it up on Windows 10 using WSL
The preferred way to install Ansible on a Mac is via pip.
pip install --user ansible
Run the following command to verify the installation:
ansible --version
Ansible Setup, Configuration, and Automation
For the purposes of this tutorial, we’ll set up a Raspberry Pi with Ansible, so even if the SD card gets corrupted, we can quickly set it up again and continue working with it.
Flash image (Raspbian)
Login with default credentials (pi/raspberry)
Change default password
Set up passwordless SSH
Install packages you want to use
With Ansible, we can automate the process.
Let’s say we have a couple of Raspberry Pis, and after installing the operating system on them, we need the following packages to be installed on all devices:
vim
wget
curl
htop
We could install these packages one by one on every device, but that would be tedious. Let Ansible do the job instead.
First, we’ll need to create a project folder.
mkdir bootstrap-raspberry && cd bootstrap-raspberry
We need a config file and a hosts file. Let’s create them.
touch ansible.cfg
touch hosts # no file extension needed
Ansible can be configured using a config file named ansible.cfg. You can find an example with all the options here.
Security risk: if you load ansible.cfg from a world-writable folder, another user could place their own config file there and run malicious code. More about that here.
Ansible searches for the configuration file in the following order:
ANSIBLE_CONFIG (environment variable if set)
ansible.cfg (in the current directory)
~/.ansible.cfg (in the home directory)
/etc/ansible/ansible.cfg
So if we have an ANSIBLE_CONFIG environment variable, Ansible will ignore all the other files (2, 3, 4). On the other hand, if we don’t specify a config file, /etc/ansible/ansible.cfg will be used.
Now we’ll use a very simple config file with contents below:
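A minimal ansible.cfg matching the description below could look like this (a sketch; the option names are from Ansible's standard configuration):

```ini
[defaults]
; use the local "hosts" file as the inventory
inventory = hosts
; skip SSH host key checking (see the note below)
host_key_checking = False
```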
Here we tell Ansible to use our hosts file as the inventory and not to check host keys. Ansible has host key checking enabled by default. If a host is reinstalled and has a different key in the known_hosts file, this will result in an error message until corrected. If a host is not initially in known_hosts, Ansible will prompt interactively for confirmation, which is not favorable if you want to automate your processes.
We list the IP addresses of the Raspberry Pis under the [raspberries] block and then assign variables to them.
ansible_connection: Connection type to the host. Defaults to ssh. See other connection types here
ansible_user: The user name to use when connecting to the host
ansible_ssh_pass: The password to use to authenticate to the host
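Putting that together, the hosts file could look something like this (a sketch; the IP address and password are placeholders, not values from the original setup):

```ini
[raspberries]
192.168.0.74

[raspberries:vars]
ansible_connection=ssh
ansible_user=pi
ansible_ssh_pass=raspberry
```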
Creating an Ansible Playbook
Now we’re done with the configuration of Ansible. We can start setting up the tasks we would like to automate. Ansible calls the list of these tasks “playbooks”.
In our case, we want to:
Change the default password,
Add our SSH public key to authorized_keys,
Install a few packages.
Meaning, we’ll have 3 tasks in our playbook that we’ll call pi-setup.yml.
By default, Ansible will attempt to run a playbook on all hosts in parallel, but the tasks in the playbook are run serially, one after another.
Let’s take a look at our pi-setup.yml as an example:
- hosts: all
  become: yes
  vars:
    user:
      - name: "pi"
        password: "secret"
        ssh_key: "ssh-rsa …"
    packages:
      - vim
      - wget
      - curl
      - htop
  tasks:
    - name: Change password for default user
      user:
        name: "{{ item.name }}"
        password: "{{ item.password | password_hash('sha512') }}"
        state: present
      loop: "{{ user }}"
    - name: Add SSH public key
      authorized_key:
        user: "{{ item.name }}"
        key: "{{ item.ssh_key }}"
      loop: "{{ user }}"
    - name: Ensure a list of packages installed
      apt:
        name: "{{ packages }}"
        state: present
    - name: All done!
      debug:
        msg: Packages have been successfully installed
This part defines fields that are related to the whole playbook:
hosts: all: Here we tell Ansible to execute this playbook on all hosts defined in our hosts file.
become: yes: Execute commands as sudo user. Ansible uses privilege escalation systems to execute tasks with root privileges or with another user’s permissions. This lets you become another user, hence the name.
vars: User-defined variables. Once you’ve defined variables, you can use them in your playbooks using the Jinja2 templating system. There are other sources vars can come from, such as variables discovered from the system. These variables are called facts.
tasks: List of commands we want to execute
Let’s take another look at the first task we defined earlier, without addressing the user module’s details. Don’t fret if this is the first time you’ve heard the word “module” in relation to Ansible; we’ll discuss them in detail later.
tasks:
  - name: Change password for default user
    user:
      name: "{{ item.name }}"
      password: "{{ item.password | password_hash('sha512') }}"
      state: present
    loop: "{{ user }}"
name: Short description of the task making our playbook self-documenting.
user: The module the task at hand configures and runs. Each module is an object encapsulating a desired state. These modules can control system resources, services, files or basically anything. For example, the documentation for the user module can be found here. It is used for managing user accounts and user attributes.
loop: Loop over variables. If you want to repeat a task multiple times with different inputs, loops come in handy. Let’s say we have 100 users defined as variables and we’d like to register them. With loops, we don’t have to run the playbook 100 times, just once.
Ansible comes with a number of modules, and each module encapsulates logic for a specific task/service. The user module above defines a user and its password. It doesn’t matter if it has to be created or if it’s already present and only its password needs to be changed, Ansible will handle it for us.
Note that Ansible will only accept hashed passwords, so either provide a pre-hashed string or, as above, use a hashing filter.
For the sake of simplicity, we stored our user’s password in our example playbook, but you should never store passwords in playbooks directly. Instead, you can use variable flags when running the playbook from the CLI, or use a password store such as Ansible Vault or the 1Password module.
Most modules expose a state parameter, and it is best practice to define it explicitly when possible. State defines whether the module should make something present (add, start, execute) or absent (remove, stop, purge), e.g. create or remove a user, or start / stop / delete a Docker container.
Notice that the user module will be called at each iteration of the loop, passing in the current value of the user variable. The loop is not part of the module; it’s on the outer indentation level, meaning it’s task-related.
The Authorized Keys Module
The authorized_keys module adds or removes SSH authorized keys for a particular user’s account, thus enabling passwordless SSH connection.
The task above takes the specified key and adds it to the specified user’s ~/.ssh/authorized_keys file, just as you would by hand or by using ssh-copy-id.
The Apt module
We need a new vars block for the packages to be installed.
vars:
  packages:
    - vim
    - wget
    - curl
    - htop
tasks:
  - name: Ensure a list of packages installed
    apt:
      name: "{{ packages }}"
      state: present
The apt module manages apt packages (used by Debian, Ubuntu, and derivatives). The name field can take a list of packages to be installed. Here, we define a variable to store the list of desired packages to keep the task cleaner. This also gives us the ability to overwrite the package list with command-line arguments when we apply the playbook, if necessary, without editing the actual playbook.
The state field is set to present, meaning that Ansible should install the package if it’s missing, or skip it if it’s already present. In other words, it ensures that the package is present. It could also be set to absent (ensure that it’s not there), latest (ensure that it’s there and is the latest version), build-dep (ensure that its build dependencies are present), or fixed (attempt to correct a system with broken dependencies in place).
Let’s run our Ansible Playbook
Just to reiterate, here is the whole playbook together:
- hosts: all
  become: yes
  vars:
    user:
      - name: "pi"
        password: "secret"
        ssh_key: "ssh-rsa …"
    packages:
      - vim
      - wget
      - curl
      - htop
  tasks:
    - name: Change password for default user
      user:
        name: "{{ item.name }}"
        password: "{{ item.password | password_hash('sha512') }}"
        state: present
      loop: "{{ user }}"
    - name: Add SSH public key
      authorized_key:
        user: "{{ item.name }}"
        key: "{{ item.ssh_key }}"
      loop: "{{ user }}"
    - name: Ensure a list of packages installed
      apt:
        name: "{{ packages }}"
        state: present
    - name: All done!
      debug:
        msg: Packages have been successfully installed
Now we’re ready to run the playbook:
ansible-playbook pi-setup.yml
Or we can run it while overriding values from the config file:
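The invocation could look something like this (a sketch; the IP address, credentials, and override values are placeholders):

```shell
ansible-playbook pi-setup.yml \
  -i 192.168.0.74, \
  -e "ansible_user=pi ansible_ssh_pass=raspberry" \
  -e '{ "packages": ["vim", "htop"] }'
```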
The command-line flags used in the snippet above are:
-i (inventory): specifies the inventory. It can either be a comma-separated list as above, or an inventory file.
-e (or --extra-vars): variables can be added or overridden through this flag. In our case we are overwriting the configuration laid out in our hosts file (ansible_user, ansible_ssh_pass) and the variables user and packages that we have previously set up in our playbook.
What to use Ansible for
Of course, Ansible is not used solely for setting up home-made servers.
Ansible is used to manage VM fleets in bulk, making sure that each newly created VM has the same configuration as the others. It also makes it easy to change the configuration of the whole fleet together by applying a change to just one playbook.
But Ansible can be used for a plethora of other tasks as well. If you have just a single server running at a cloud provider, you can define its configuration in a way that others can read and use easily. You can also define maintenance playbooks, such as creating new users and adding the SSH keys of new employees to the server so they can log into the machine too.
Or you can use AWX or Ansible Tower to create a GUI-based Linux server management system that provides an experience similar to what Windows Servers provide.
Next time, we’ll dive deeper into an enterprise use case of Ansible with AWX.
Setting up your remote work space can be daunting. You’ll be confronted with dozens of purchasing decisions, and you may feel like you need to do a lot of research. I build and advise remote development teams as part of my regular work, and have been doing so since 2014. I’ve looked at all the microphones and headsets, furniture, and accessories, so you can gear up quickly and get right to work.
Here are my current recommendations.
Note: Purchase using these links to support more great, free content.
A Fast Computer
Remote software development is CPU and memory intensive. You’ll often be recording and transcoding video clips, screen sharing or video chatting while compiling and interacting with a web browser.
Lots of computers will struggle with those demands. You’ll start an npm install and the video will drop frames. Grab at least 16 GB of RAM for best results. 32 GB is better. A current MacBook Pro will do a great job.
I recently purchased the 16" 8-core i9. It’s a great machine. It’s lighter, smaller, and still manages to pack an extra inch of screen real-estate into the compact package. I love the new keyboard, touch bar, and touch ID, and I don’t miss the old function keys. Apple recently announced the new M1 laptops.
New MacBooks only have USB-C compatible Thunderbolt 3 ports. You’ll probably need to connect to USB 3, HDMI, and memory cards. You can get a connector hub that has all of those built-in.
Ideally, you’ll want to mount your laptop on a high stand to bring your screen and video camera up to eye level. Doing so will make you more comfortable, allow you to easily adjust your camera for video conferencing, and probably improve your health, but it will also put the built-in keyboard and trackpad out of easy reach.
In my opinion, nothing beats the Apple Magic Keyboard and Apple Magic Trackpad for productivity. Syncing is easy. There are no battery replacements to worry about. Plug them in with the included cable, and you can charge them while you work.
If you plan to do any gaming to relax when you’re not working, you’ll want a real mouse. Trackpads just won’t do it. The best mice out there are designed for gamers. Check out the Logitech G Pro Wireless.
A built-in mic is OK, but it’s not ideal for video chats. First, it’s too close to the speakers, so if the speakers are too loud, it will pick up your coworkers’ voices and echo them back, which can be very distracting. Second, it’s not very good at rejecting sound that’s farther away, so it’ll pick up all the background noise.
If you want your coworkers to really hear you clearly, pick up a better external mic. Choose a dynamic mic rather than a condenser mic. You need to speak directly into a dynamic mic for it to pick up the sound well. That’s a good thing, because it automatically filters out most of the background noise.
I’ve tried a whole lot of mics over the years, and I keep circling back to the Shure Beta 58A. This is a professional mic commonly used by musicians on stage, which means it’s not a USB mic. That’s OK.
You’ll need a USB audio interface. My favorite is the Focusrite Scarlett 2i2. While you’re at it, pick up an adjustable boom stand for the mic. There are good ones that clamp to your desk.
The MacBook Pro monitor is great, but even a 16" monitor is a bit small. While I’m coding, I usually have a browser and terminal on one screen, and an IDE on the other. Having plenty of screen real-estate can help you be up to 20% more productive. It’s worth splurging on a nice ultra wide. While you’re at it, pick up a desk mountable, adjustable stand. My favorite is the VARIDESK Vari Monitor arm + Laptop Stand.
If your surroundings are loud, first, remember your coworkers and mute yourself when you’re not talking. Second, you may want to block out the noise for yourself. A pair of comfortable, over-ear, closed-back headphones might come in handy. It’s hard to do better than the Beyerdynamic DT 1770 Pro.
If you are in a particularly noisy environment, you may want to get some cans with active noise canceling. Many of those also offer wireless connectivity. The Sennheiser Momentum 3 offers best-in-class sound, and its more natural sound signature helps you avoid ear fatigue. It also offers a transparent mode so you can have conversations or hear your surroundings without taking them off.
Sometimes you might want to tune into a meeting without looking like you’re hiding in some gigantic over-ear headphones. Sennheiser has you covered with the Momentum True Wireless Earbuds. I find these far more comfortable than the ubiquitous Apple AirPods. Despite their popularity, AirPods don’t comfortably fit all ears, don’t offer different size or ear shape options, and can’t hold a candle to Sennheiser on comfort or sound quality.
If you don’t need the noise isolation, the AKG K 701 will let your ears breathe and offer more transparent, detailed, expansive sound than any of the other options here. If you’re looking for the best sound quality to enjoy some great sounding music while you work, with great comfort and build quality, these are the ones to buy.
Sick of headphones? Check out the Beats Pill+ bluetooth speaker. It’s hard to believe such great sound can come out of such a small package. Compact, loud, and plenty of punch.
For best results, you don’t want a big window with bright sunlight directly behind you. Your indoor lighting won’t be able to compete with the sun, and your face will be lost in shadow on your video calls. Instead, you want good illumination. I used to use a pair of brightness and color adjustable LED panel desk lamps which I set up on the left and right sides of my desk. Now I use desk mountable LED panels to free up desk space.
The panels are bigger, which makes it possible to position them a little further away and retain the softness. They also get brighter than the lamps, so they can more easily compete with and overpower a bright background, like a window.
Many people make the mistake of skipping the lights. Don’t make that mistake. Good lighting is an inexpensive way to dramatically improve the experience of video calls. Even an inexpensive lamp can make a dramatic difference.
You’re going to spend a lot of time sitting. You need a good chair. My favorite is the Herman Miller Aeron chair. I would say it’s the Rolls-Royce of office chairs, but it’s not about luxury. It’s about posture, comfort, and health. A cheap chair could lead to far more expensive back trouble. Besides, it looks a lot more professional than a gaming chair.
Gear Up for Remote Work was originally published in JavaScript Scene on Medium, where people are continuing the conversation by highlighting and responding to this story.
In the previous blog post we learned how to decorate a field of a class with attributes to adjust the Json serialization to our needs. This post is about serializing fields of type TObjectList<T> or descendants thereof.
Let’s recap the problem and the final question: The Json serializer does a good job for array of objects, but fails miserably on TObjectList<T> fields.
What can we do to make generic object lists serialize properly in both directions?
As seen in the previous post, our best bet will be some neat attributes to decorate the fields with. The JsonUTCDate attribute served well, so what about a JsonObjectList attribute that registers a fancy interceptor to handle the nitty-gritty details? As you might have guessed, this time it is not that easy.
To populate a generic object list, the serializer must create the individual objects. For that it needs to know the type of these objects. For arrays that is already handled inside TJSONUnMarshal.JSONToTValue by asking RTTI for the ElementType. For a generic object list we must provide that type, so we declare a generic TObjectListInterceptor:
type
  TObjectListInterceptor<T: class> = class(TJSONInterceptor)
  public
    procedure AfterConstruction; override;
    function ObjectsConverter(Data: TObject; Field: string): TListOfObjects; override;
    procedure ObjectsReverter(Data: TObject; Field: string; Args: TListOfObjects); override;
  end;

implementation

procedure TObjectListInterceptor<T>.AfterConstruction;
begin
  inherited;
  ObjectType := T;
end;

function TObjectListInterceptor<T>.ObjectsConverter(Data: TObject; Field: string): TListOfObjects;
var
  I: Integer;
  ctx: TRTTIContext;
  list: TObjectList<T>;
begin
  list := TObjectList<T>(ctx.GetType(Data.ClassType).GetField(Field).GetValue(Data).AsObject);
  SetLength(Result, list.Count);
  for I := 0 to list.Count - 1 do
    Result[I] := list.Items[I];
end;

procedure TObjectListInterceptor<T>.ObjectsReverter(Data: TObject; Field: string; Args: TListOfObjects);
var
  ctx: TRTTIContext;
  list: TObjectList<T>;
  obj: TObject;
begin
  list := TObjectList<T>(ctx.GetType(Data.ClassType).GetField(Field).GetValue(Data).AsObject);
  list.Clear;
  for obj in Args do
    list.Add(T(obj));
end;
The implementation doesn’t look that complicated, but I want to make clear that we never set the field value! We only use GetValue to get hold of the actual list instance. This has some consequences later, so we’d better remember it.
The JsonObjectList attribute takes the actual interceptor class as a parameter and registers it as a ctObjects converter and an rtObjects reverter:
type
  JsonObjectListAttribute = class(JsonReflectAttribute)
  public
    constructor Create(InterceptorType: TClass);
  end;

implementation

constructor JsonObjectListAttribute.Create(InterceptorType: TClass);
begin
  inherited Create(ctObjects, rtObjects, InterceptorType);
end;
Let me emphasize the difference of a rtObjects and a rtTypeObjects reverter. The corresponding methods have the following signatures:
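As a sketch, the two declarations on TJSONInterceptor look roughly like this (from memory of REST.JsonReflect; verify against your Delphi version):

```delphi
// Sketch of the relevant TJSONInterceptor declarations (REST.JsonReflect);
// verify against your Delphi version.
procedure ObjectsReverter(Data: TObject; Field: string; Args: TListOfObjects); virtual;
function TypeObjectsReverter(Data: TListOfObjects): TObject; virtual;
```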
You can see that ObjectsReverter is meant to handle the Args list (which is basically an array of TObject) by doing whatever is necessary to the Field of the Data object. Meanwhile the TypeObjectsReverter must return an instance built from the Args list, which is going to replace any existing instance present in the object field. For this to work, we need the actual type of the list – in addition to the element type.
The cast to TObjectList<T> seen above works simply because we are acting on the actual field instance with the correct type, and it just happens that the called methods Count, Clear, and Add route directly to the internal FListHelper instance, unaffected by the actual type <T>.
The choice of an rtObjects reverter requires the element type as the only generic type parameter. I will explain later why a derived TObjectList<T> cannot be decorated with an rtTypeObjects attribute, ruling out such an approach.
Now we want to make use of the new attribute and see how it works for a generic object list field of our class.
type
  TMyAddressBook = class
  private
    [JsonObjectList(TObjectListInterceptor<TContact>)]
    FContacts: TObjectList<TContact>;
Unfortunately that doesn’t even compile! It turned out that the compiler is not able to resolve the instantiated generic type TObjectListInterceptor<TContact> used as a parameter for the JsonObjectList attribute. We have to declare an alias for it to make that work.
type
  TContactList = TObjectList<TContact>;
  TContactListInterceptor = TObjectListInterceptor<TContact>;

type
  TMyAddressBook = class
  private
    [JsonObjectList(TContactListInterceptor)]
    FContacts: TContactList;
    ...
Note that we need at least one type keyword between the alias and its use inside the attribute.
Are we finished now?
Ehm, no! Perhaps you remember me saying “I want to make clear that we never set the field value!”. That sentence assumes that there already is an instance of TContactList present in the FContacts field.
No problem, we can create that instance in the constructor and free it in the destructor. We probably would have done that anyway. Unfortunately that isn’t enough. The standard implementation replaces all instances in the fields with nil before doing any de-serialization of an object, and frees those saved field instances after the field gets a newly created valid instance –
– unless we tell it not to do so.
The attribute for skipping the destruction of the field instances is JSONOwned with parameter False. The final attribute decoration for a generic object list field now looks like this:
type
  TMyAddressBook = class
  private
    [JSONOwned(False), JsonObjectList(TContactListInterceptor)]
    FContacts: TContactList;
    ...
There we are! Now we are able to use generic object lists as fields while still serializing them as object arrays and vice versa. The additional declaration of an interceptor alias and the per field decoration with some attributes seems acceptable given this advantage.
Why no attribute directly for TContactList?
You remember that the TypeObjectsReverter needs to know the actual list class. A possible declaration might be TObjectListInterceptor<T: class; ListT: TObjectList<T>>, which is indeed possible to implement. The problem is the use of the attribute, which would have to look like this:
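A hypothetical sketch of that usage, illustrating the circular dependency (the type names here are illustrative, not from a working example):

```delphi
type
  // Error: TContactListInterceptor must be fully defined before it can be
  // used in the attribute, but it refers to the not-yet-complete TContactList.
  TContactListInterceptor = TObjectListInterceptor<TContact, TContactList>;

  [JsonObjectList(TContactListInterceptor)]
  TContactList = class(TObjectList<TContact>);
```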
Unfortunately this doesn’t compile, because TContactListInterceptor is not fully defined when used in the attribute. We need to fully define the interceptor class using TContactList before we can decorate TContactList with that attribute. If anyone comes up with a workaround, tell me.
Serializing objects to Json as well as de-serializing them with the Delphi standard libraries has been the subject of many discussions. While the majority suggests using another library or a self-implemented solution, there are others who would prefer the built-in tools for a couple of reasons – simplicity and the availability with every (decent) Delphi installation being the most mentioned ones.
The ease and elegance of a TRESTRequest.AddBody<T>(myInstance) call is hard to attain by other means. I guess I am not the only one tempted to make use of it. With a bit of care taken when designing the objects to serialize, the results are often quite satisfying and fit the requirements. Augmenting this with some advanced techniques shown in this post may be enough to keep the benefits without the need for external code.
If we inspect the above-mentioned generic AddBody method, we soon find out that the serializing part is delegated to the TJson.ObjectToJsonObject method. We note that the call just passes the default TJsonOptions for the second parameter.
Without any tweaking the Json serializer takes each field of the class, and it does a pretty good job for simple (value) types, objects, and arrays of both. In reality, the use of object arrays as fields inside a class is rarely a valid approach. Generic object lists derived from TObjectList<T> are far more practical and thus the preferred way to go. Unfortunately those are not handled well, and in most cases the result is not even what non-Delphi counterparts accept or deliver – they mostly use plain arrays for these lists. The ultimate question now is:
What can we do to make generic object lists serialize properly in both directions?
I hate to disappoint you, but the generic object lists will be covered in the next blog post, while this one tackles an easier task – it would have been too long otherwise.
Although the Json serializer is quite resistant to extensions, there are ways to achieve more than appears possible at first glance. Our allies in this endeavor are attributes.
Let’s start with a simple task as a warm-up. As mentioned above, the AddBody method uses the default TJsonOptions for the ObjectToJsonObject call, which are joDateIsUTC and joDateFormatISO8601. While the date format is not a bad choice, the requirement to provide UTC dates may force some unwanted work on us. What if we could just say: Hey, this is a TDateTime field, but I want it converted to UTC only in the Json representation? In addition there should be a null date, equivalent to the ISO string “0000-00-00T00:00:00.000Z”, but the Json for these shall be a null value instead of that string value. To abbreviate all this we want an attribute named JsonUTCDate, so that such a field can simply be decorated with [JsonUTCDate].
OK, this is the new attribute declaration and its implementation:
type
JsonUTCDateAttribute = class(JsonReflectAttribute)
public
constructor Create;
end;
implementation
constructor JsonUTCDateAttribute.Create;
begin
inherited Create(ctObject, rtString, TUTCDateTimeInterceptor);
end;
Pretty simple – because we delegate the actual work to a class named TUTCDateTimeInterceptor, which we will have a look at later. For the moment we notice that the JsonUTCDate attribute registers different types for the converter and the reverter. The interceptor acts as a ctObject converter, but as a rtString reverter. While we could accomplish most of the task also with a ctString converter, the demand for the null value requires us to exchange the Json string value with a Json null value. A ctString converter cannot be used for that.
The TUTCDateTimeInterceptor is derived from TJSONInterceptor and overrides the methods ObjectConverter and StringReverter according to the values given in JsonUTCDateAttribute.Create. The implementation of StringReverter is pretty straightforward: it is based on the original implementation found in TISODateTimeInterceptor, augmented by a call to a small function ToLocalTime, which takes care of the null date value (we don’t want the null date converted to local time, do we?):
const
cNoDate = -DateDelta;
function IsNoDate(ADate: TDateTime): Boolean;
begin
Result := Round(ADate) = cNoDate;
end;
function TUTCDateTimeInterceptor.ToLocalTime(const ADateTime: TDateTime): TDateTime;
begin
Result := ADateTime;
if not IsNoDate(Result) then
Result := TTimeZone.Local.ToLocalTime(Result);
end;
procedure TUTCDateTimeInterceptor.StringReverter(Data: TObject; Field, Arg: string);
var
ctx: TRTTIContext;
datetime: TDateTime;
begin
datetime := ToLocalTime(ISO8601ToDate(Arg));
ctx.GetType(Data.ClassType).GetField(Field).SetValue(Data, datetime);
end;
The ObjectConverter method is a bit more sophisticated to achieve the null value requirement:
function TUTCDateTimeInterceptor.ToUniversalTime(const ADateTime: TDateTime; const ForceDaylight: Boolean): TDateTime;
begin
Result := ADateTime;
if not IsNoDate(Result) then
Result := TTimeZone.Local.ToUniversalTime(Result, ForceDaylight);
end;
function TUTCDateTimeInterceptor.ObjectConverter(Data: TObject; Field: string): TObject;
var
ctx: TRTTIContext;
date: TDateTime;
begin
Result := nil;
date := ctx.GetType(Data.ClassType).GetField(Field).GetValue(Data).AsType<TDateTime>;
if IsNoDate(date) then Exit;
StringProxy.Value := DateToISO8601(ToUniversalTime(date));
Result := StringProxy;
end;
First it takes the current date value from the field and checks for the null date, exiting with a nil result if found. A nil object will end up as a null in the Json string.
For all other dates we convert to UTC with the ToUniversalTime method. Alas, we cannot return the resulting string directly – we need to return an object. For this we use a local instance of a TStringProxy object, whose ownership we keep to avoid memory leaks. The TStringProxy class is decorated with a ctTypeString interceptor to emit just the plain string. If we omitted that, we would get an inner object containing the value as a separate field, instead of just the value for the current TDateTime field.
This is the complete class declaration for TUTCDateTimeInterceptor together with the missing methods:
type
TUTCDateTimeInterceptor = class(TJSONInterceptor)
private type
TStringProxyInterceptor = class(TJSONInterceptor)
public
function TypeStringConverter(Data: TObject): string; override;
end;
[JsonReflect(ctTypeString, rtTypeString, TStringProxyInterceptor)]
TStringProxy = class
private
FValue: string;
public
property Value: string read FValue write FValue;
end;
var
FStringProxy: TStringProxy;
function GetStringProxy: TStringProxy;
strict protected
function ToLocalTime(const ADateTime: TDateTime): TDateTime;
function ToUniversalTime(const ADateTime: TDateTime; const ForceDaylight: Boolean = False): TDateTime;
property StringProxy: TStringProxy read GetStringProxy;
public
destructor Destroy; override;
procedure StringReverter(Data: TObject; Field: string; Arg: string); override;
function ObjectConverter(Data: TObject; Field: string): TObject; override;
end;
function TUTCDateTimeInterceptor.GetStringProxy: TStringProxy;
begin
if FStringProxy = nil then begin
FStringProxy := TStringProxy.Create;
end;
Result := FStringProxy;
end;
destructor TUTCDateTimeInterceptor.Destroy;
begin
FStringProxy.Free;
inherited Destroy;
end;
How your team actually writes its code is very important if your organization is trying to build products in an Agile way. This post considers the risks associated with insufficiently Agile coding practices and explains how to identify them before they become major issues.
Nearly every software organization has high ambitions for increasing their agility and yours is probably one of them. It’s common to bring in experienced coaches to help teams operate with more agility and to discover ways of delivering more value. But, if you’re working on software products, you’ll find it’s hard to realize some of the benefits of Agile without changing the behaviour of developers in the codebase. Too many coaching efforts place little or no emphasis there.
Agile Coaching helps your business to become more successful by improving the way you plan and deliver software. Technical Agile Coaching specifically focuses on how people write code. How do you know if you need Technical Agile Coaching? Here are three signs.
You spend more time fixing bugs than building new features
Look at all the tasks you work on each week. Ask yourself how much time is being spent on innovative new features, and how much on fixing defects in existing features? For some teams the answer can be as little as 5% of time spent on proactive new work. Now ask yourself – how much more money would our organization make if we spent less time on fixes and more on delivering new features?
Turning this situation around is about increasing the quality of your work so that fewer fixes are needed, resulting in more time for innovation. Often, the root cause of defects is technical debt and code that is hard to understand and change.
You can mitigate this by training your developers to build in quality from the start. Technical Agile Coaching helps teams to improve automated tests and reduce technical debt by demonstrating various techniques and practices for evolving design safely.
Delivery of new features is delayed, sometimes more than once
Are your new features around 80% done for long periods and then take forever to be actually ready for release? Often this is the biggest complaint from customers and other stakeholders. How much would it be worth to your organization for new functionality to be available reliably and on time?
Late deliveries are often caused by developers not collaborating effectively in the code. To build a new feature they need to bring the whole team’s expertise to bear, and integrate their work seamlessly. Typically, the coding work is not divided up in a way that makes collaboration easy, and changes are not integrated until late in the process.
Technical Agile Coaching helps developers to deliver on time: reliable automated regression tests promote better collaboration with a shared view of the current status; teams learn to divide up the coding work and integrate changes far more frequently than the interval between external deliveries; developers learn to use their tools better and to communicate effectively within the team so they can complete tasks more reliably.
Developers complain the code is so bad they need to re-write the whole thing
Sometimes developers ask for a hiatus in new feature development for several months, even a year, so they can start over by rebuilding the same functionality using new tools and frameworks. This is a significant cost to your business, and a large risk. A more staged approach carries far less risk and allows you to build better functionality with much more manageable cost.
Usually, an organization arrives in this situation after many years of accumulated technical debt. The technologies chosen originally become outdated and are not upgraded in a timely way. Developers make design decisions and then leave without fully documenting or otherwise transferring design knowledge to new team members. New features are added that were not anticipated in the original design. Gradually the code quality declines and it becomes harder and harder to work with.
Technical Agile Coaching helps developers gain better control over existing code, so they can continue working with it and building new features. Teams learn to migrate code to newer technologies in a series of smaller, less risky steps, adding new functionality as they go. Developers learn to pay off technical debt and to communicate better about design decisions.
Benefits of Technical Agile Coaching
Technical Agile Coaching helps developers to change the way they work, day to day, minute by minute, in their codebase. They learn to increase the quality of the code in their product and work more effectively as a team. The coach works closely with the developers while they are writing code to remind, reinforce, teach, and demonstrate Agile practices. With Technical Agile Coaching you should expect to see the following benefits:
Reductions in technical debt
Better automated regression tests
Improved communication between developers within the team
Smaller, safer changes integrated often and documented well
Improved effectiveness using team development tools and IDEs.
Technical Agile Coaching helps a team to improve how they write code, and begin to use more effective agile development practices together.
If you’re interested in getting some Technical Agile Coaching, please contact us at ProAgile. We have many coaches with a lot of experience in agile development and several with technical coaching expertise.
In this article, you will learn how you can simplify your callback or Promise based Node.js application with async functions (async/await).
Whether you’ve looked at async/await and promises in javascript before, but haven’t quite mastered them yet, or just need a refresher, this article aims to help you.
A note from the authors:
We re-released our number one article on the blog, called "Mastering Async Await in Node.js", which has been read by more than 400,000 developers in the past 3 years.
This staggering 2000-word essay is usually the No. 1 result when you Google for Node.js async/await info, and for a good reason.
It's full of real-life use cases, code examples, and deep-diving explanations on how to get the most out of async/await. Since it's a re-release, we have fully updated it with new code examples, as a lot of new Node.js features that you can take advantage of have arrived since the original release.
What are async functions in Node?
Async functions are available natively in Node and are denoted by the async keyword in their declaration. They always return a promise, even if you don’t explicitly write them to do so. Also, the await keyword is only available inside async functions at the moment - it cannot be used in the global scope.
In an async function, you can await any Promise or catch its rejection cause.
So if you had some logic implemented with promises:
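For illustration – getUser and getOrders are made-up promise-returning helpers, not from the original post – compare a .then chain with its async/await equivalent:

```javascript
// Made-up promise-returning helpers for illustration
const getUser = (id) => Promise.resolve({ id, name: 'Ann' })
const getOrders = (user) => Promise.resolve([`order-of-${user.id}`])

// Promise-based version: results travel through .then callbacks
function listOrders (id) {
  return getUser(id)
    .then((user) => getOrders(user))
}

// The same logic as an async function: await unwraps each promise
async function listOrdersAsync (id) {
  const user = await getUser(id)
  return getOrders(user)
}
```

Both versions return a promise resolving to the same value; the async one just reads top to bottom.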
Currently in Node you get a warning about unhandled promise rejections, so you don’t necessarily need to bother with creating a listener. However, it is recommended to crash your app in this case, because when you don’t handle an error, your app is in an unknown state. This can be done either by using the --unhandled-rejections=strict CLI flag, or by implementing something like this:
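A minimal sketch of such a listener (exiting with code 1 is an arbitrary choice):

```javascript
process.on('unhandledRejection', (error) => {
  // Log the rejection reason, then exit non-zero so a supervisor can restart us
  console.error(error)
  process.exit(1)
})
```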
Automatic process exit will be added in a future Node release - preparing your code ahead of time for this is not a lot of effort, but will mean that you don’t have to worry about it when you next wish to update versions.
Patterns with async functions
There are quite a few use cases where the ability to handle asynchronous operations as if they were synchronous comes in very handy, as solving them with Promises or callbacks requires the use of complex patterns.
Since node@10.0.0, there is support for async iterators and the related for-await-of loop. These come in handy when the actual values we iterate over, and the end state of the iteration, are not known by the time the iterator method returns - mostly when working with streams. Aside from streams, there are not a lot of constructs that have the async iterator implemented natively, so we’ll cover them in another post.
Retry with exponential backoff
Implementing retry logic was pretty clumsy with Promises:
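A sketch of both styles – fakeRequest is a made-up stand-in that fails twice before succeeding, and the backoff delays are arbitrary:

```javascript
// A stand-in for a real network call: fails twice, then succeeds
let attempts = 0
function fakeRequest (url) {
  attempts++
  return attempts < 3 ? Promise.reject(new Error('boom')) : Promise.resolve(`ok: ${url}`)
}

// Promise-only attempt: recursion is needed to chain a variable number of retries
function requestWithRetry (url, retryCount = 0, maxRetries = 5) {
  return fakeRequest(url).catch((error) => {
    if (retryCount >= maxRetries) return Promise.reject(error)
    const delay = 100 * Math.pow(2, retryCount) // exponential backoff
    return new Promise((resolve) => setTimeout(resolve, delay))
      .then(() => requestWithRetry(url, retryCount + 1, maxRetries))
  })
}

// With async/await the same logic collapses into a plain loop
async function requestWithRetryAsync (url, maxRetries = 5) {
  let lastError
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    try {
      return await fakeRequest(url)
    } catch (error) {
      lastError = error
      await new Promise((resolve) => setTimeout(resolve, 100 * Math.pow(2, attempt)))
    }
  }
  throw lastError
}
```

The recursive promise version works, but the loop version states the intent directly.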
Not as hideous as the previous example, but if you have a case where 3 asynchronous functions depend on each other in the following way, then you have to choose from several ugly solutions.
functionA returns a Promise, functionB needs its resolved value, and functionC needs the resolved values of both functionA's and functionB's Promises.
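Solution 1 is to nest the .then calls – the infamous "Christmas tree". Here functionA, functionB and functionC are simple stand-ins resolving to numbers, so the chain can be followed:

```javascript
// Stand-ins for the three dependent asynchronous functions
const functionA = () => Promise.resolve(1)
const functionB = (a) => Promise.resolve(a + 1)
const functionC = (a, b) => Promise.resolve(a + b)

// Solution 1: nest the .then calls so valueA stays in scope for functionC
function executeAsyncTask () {
  return functionA()
    .then((valueA) => {
      return functionB(valueA)
        .then((valueB) => {
          return functionC(valueA, valueB)
        })
    })
}
```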
With this solution, we get valueA from the surrounding closure in the innermost then, and valueB as the value the previous Promise resolves to. We cannot flatten out the Christmas tree, as we would lose the closure and valueA would be unavailable for functionC.
Solution 2: Moving to a higher scope
function executeAsyncTask () {
let valueA
return functionA()
.then((v) => {
valueA = v
return functionB(valueA)
})
.then((valueB) => {
return functionC(valueA, valueB)
})
}
In the Christmas tree, we used a higher scope to make valueA available as well. This case works similarly, but now we created the variable valueA outside the scope of the .then-s, so we can assign the value of the first resolved Promise to it.
This one definitely works, flattens the .then chain and is semantically correct. However, it also opens the door to new bugs in case the variable name valueA is used elsewhere in the function. We also need to use two names – valueA and v – for the same value.
There is no other reason for valueA to be passed on in an array together with the Promise of functionB than to be able to flatten the tree. The two might be of completely different types, so there is a high probability of them not belonging in an array at all.
You can, of course, write a helper function to hide away the context juggling, but it is quite difficult to read, and may not be straightforward to understand for those who are not well versed in functional magic.
By using async/await our problems are magically gone:
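A sketch (functionA, functionB and functionC are simple stand-ins):

```javascript
const functionA = () => Promise.resolve(1)
const functionB = (a) => Promise.resolve(a + 1)
const functionC = (a, b) => Promise.resolve(a + b)

// Both intermediate values are plain local variables:
// no nesting, no outer-scope tricks, no artificial arrays
async function executeAsyncTask () {
  const valueA = await functionA()
  const valueB = await functionB(valueA)
  return functionC(valueA, valueB)
}
```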
This is similar to the previous one. In case you want to execute several asynchronous tasks at once and then use their values at different places, you can do it easily with async/await:
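A sketch with made-up getUserData and getProductData helpers:

```javascript
// Made-up data sources for illustration
const getUserData = () => Promise.resolve({ name: 'Ann' })
const getProductData = () => Promise.resolve({ sku: 42 })

async function executeParallelAsyncTasks () {
  // Both tasks start immediately; await suspends until both have resolved
  const [userData, productData] = await Promise.all([getUserData(), getProductData()])
  return { userData, productData }
}
```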
As we've seen in the previous example, we would either need to move these values into a higher scope or create a non-semantic array to pass these values on.
Array iteration methods
You can use map, filter and reduce with async functions, although they behave pretty unintuitively. Try guessing what the following scripts will print to the console:
map
function asyncThing (value) {
return new Promise((resolve) => {
setTimeout(() => resolve(value), 100);
});
}
async function main () {
return [1,2,3,4].map(async (value) => {
const v = await asyncThing(value);
return v * 2;
});
}
main()
.then(v => console.log(v))
.catch(err => console.error(err));
filter
function asyncThing (value) {
return new Promise((resolve) => {
setTimeout(() => resolve(value), 100);
});
}
async function main () {
return [1,2,3,4].filter(async (value) => {
const v = await asyncThing(value);
return v % 2 === 0;
});
}
main()
.then(v => console.log(v))
.catch(err => console.error(err));
If you log the returned values of the iteratee with map you will see the array we expect: [ 2, 4, 6, 8 ]. The only problem is that each value is wrapped in a Promise by the AsyncFunction.
So if you want to get your values, you'll need to unwrap them by passing the returned array to a Promise.all:
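Like this (asyncThing is repeated so the snippet is self-contained):

```javascript
function asyncThing (value) {
  return new Promise((resolve) => {
    setTimeout(() => resolve(value), 100);
  });
}

async function main () {
  // Promise.all unwraps the array of promises produced by the async iteratee
  return Promise.all([1, 2, 3, 4].map(async (value) => {
    const v = await asyncThing(value);
    return v * 2;
  }));
}

main()
  .then(v => console.log(v))
  .catch(err => console.error(err));
```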
Originally, you would first wait for all your promises to resolve and then map over the values:
function main () {
return Promise.all([1,2,3,4].map((value) => asyncThing(value)));
}
main()
.then(values => values.map((value) => value * 2))
.then(v => console.log(v))
.catch(err => console.error(err));
This seems a bit simpler, doesn’t it?
The async/await version can still be useful if you have some long running synchronous logic in your iteratee and another long-running async task.
This way you can start calculating as soon as you have the first value - you don't have to wait for all the Promises to be resolved to run your computations. Even though the results will still be wrapped in Promises, those are resolved a lot faster than if you did it the sequential way.
What about filter? Something is clearly wrong...
Well, you guessed it: even though the returned values are [ false, true, false, true ], they will be wrapped in promises, which are truthy, so you'll get back all the values from the original array. Unfortunately, all you can do to fix this is to resolve all the values and then filter them.
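A sketch of that fix, resolving first and filtering afterwards:

```javascript
function asyncThing (value) {
  return new Promise((resolve) => {
    setTimeout(() => resolve(value), 100);
  });
}

async function main () {
  // Resolve all values first, then filter the plain numbers
  const values = await Promise.all([1, 2, 3, 4].map((value) => asyncThing(value)));
  return values.filter((value) => value % 2 === 0);
}

main()
  .then(v => console.log(v))
  .catch(err => console.error(err));
```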
Reducing is pretty straightforward. Bear in mind though that you need to wrap the initial value into Promise.resolve, as the returned accumulator will be wrapped as well and has to be await-ed.
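A sketch that sums the resolved values:

```javascript
function asyncThing (value) {
  return new Promise((resolve) => {
    setTimeout(() => resolve(value), 100);
  });
}

async function main () {
  return [1, 2, 3, 4].reduce(async (acc, value) => {
    // The accumulator is a promise itself, so both operands have to be awaited
    return await acc + await asyncThing(value);
  }, Promise.resolve(0));
}

main()
  .then(v => console.log(v))
  .catch(err => console.error(err));
```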
As async/await is pretty clearly intended for imperative code styles, those who prefer a functional approach may want to stick with promises. To make your .then chains look more "pure", you can use Ramda's pipeP and composeP functions.
Rewriting callback-based Node.js applications
Async functions return a Promise by default, so you can rewrite any callback-based function to use Promises, then await their resolution. You can use the util.promisify function in Node.js to turn callback-based functions into ones that return a Promise.
Rewriting Promise-based applications
Simple .then chains can be upgraded in a pretty straightforward way, so you can move to using async/await right away.
If you liked the good old concepts of if-else conditionals and for/while loops,
if you believe that a try-catch block is the way errors are meant to be handled,
you will have a great time rewriting your services using async/await.
As we have seen, it can make several patterns a lot easier to code and read, so it is definitely more suitable in several cases than Promise.then() chains. However, if you are caught up in the functional programming craze of the past years, you might want to pass on this language feature.
Are you already using async/await in production, or do you plan on never touching it? Let's discuss it in the comments below.
This article was originally written by Tamas Kadlecsik and released on July 5, 2017. The heavily revised second edition was authored by Janos Kubisch and Tamas Kadlecsik, and it was released on February 17, 2020.
Old? Really! My son is one year older and I would still rate him young. I myself remember the years between 25 and 30 as some of the best in my life. It took almost 10 more years until my first encounter with Delphi, soon after its first release. (Now you can estimate my age.)
At that time I had been working with Turbo Pascal for a couple of years, starting with TP 2, and I was able to make a decent living as an independent developer. While still investigating the usefulness of Turbo Pascal for Windows, the sudden availability of Delphi was a massive game changer.
Some years ago I was asked to port a Delphi 5 application, grown over many years, to the then-current Delphi version. The developer in charge didn’t have the capacity to do it, so they looked for external help and, well, I got the gig. Since the port was finished I have still been working occasionally on that project when they need some help.
At some point I became aware that the customer was planning to create a completely new version of that software, written in C#. I was a bit miffed, as they hadn’t even bothered to ask me about doing it in Delphi. So I simply asked if I might at least make an offer.
They were pretty skeptical about a Delphi solution:
But we don’t want the Windows look. We rather have some individual look, matching our corporate identity.
No problem. We can use a style made just for you.
But it has to work on a touch screen.
Delphi supports that out of the box.
But we need a mobile version, too.
So, what?
(some more objections – easily overruled)
They handed me some drafts of the different screens, so I could see what they were after. Within a few days I was able to create a prototype in Delphi that not only resembled their drafts pretty closely, but also showed some meaningful actions when clicking buttons and switching tabs. Some cool controls from TMS helped a lot with the first impression.
Somehow I could figure out the cost the other company had estimated – a mid six-digit number. My offer was about 10 – 20% of that. I also teased them with having a working solution ready for the upcoming exhibition – something unthinkable for the others.
Meanwhile the C# endeavor is long forgotten. There is still some skepticism about Delphi, no matter what results we show – it seems to be some genetic thing.
In the movie Something’s Gotta Give, the great Jack Nicholson describes women between 25 and 30 as the age when everything fits. It seems there are some great times ahead…
In this post, we cover what tools and techniques you have at your disposal when handling Node.js asynchronous operations: async.js, promises, and async functions.
After reading this article, you’ll know how to use the latest async tools at your disposal provided by Node.js!
Node.js at Scale is a collection of articles focusing on the needs of companies with bigger Node.js installations and advanced Node developers.
If you have not read these articles, I highly recommend them as introductions!
The Problem with Node.js Async
Node.js itself is single-threaded, but some tasks can run in parallel thanks to its asynchronous nature.
But what does running in parallel mean in practice?
Since we program a single-threaded VM, it is essential that we do not block execution by waiting for I/O, but handle operations concurrently with the help of Node.js's event-driven APIs.
Let’s take a look at some fundamental patterns, and learn how we can write resource-efficient, non-blocking code, with the built-in solutions of Node.js.
The Classical Approach - Callbacks
Let's take a look at these simple async operations. They do nothing special, just fire a timer and call a function once the timer finished.
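They might look like the following sketch (the timings and return values here are arbitrary):

```javascript
// Two error-first callback functions: a timer fires, then the callback runs
function fastFunction (done) {
  setTimeout(function () {
    done(null, 'fast')
  }, 100)
}

function slowFunction (done) {
  setTimeout(function () {
    done(null, 'slow')
  }, 300)
}
```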
Our higher-order functions can be executed sequentially or in parallel with the basic "pattern" by nesting callbacks - but using this method can lead to an untameable callback-hell.
function runSequentially (callback) {
fastFunction((err, data) => {
if (err) return callback(err)
console.log(data) // results of fastFunction
slowFunction((err, data) => {
if (err) return callback(err)
console.log(data) // results of slowFunction
// here you can continue running more tasks
})
})
}
To become an efficient Node.js developer, you have to avoid the constantly growing indentation level, produce clean and readable code and be able to handle complex flows.
Let me show you some of the tools we can use to organize our code in a nice and maintainable way!
#1: Using Promises
There have been native promises in javascript since 2014, and they received an important performance boost in Node.js 8. We will make use of them in our functions to make them non-blocking - without the traditional callbacks. The following example calls the modified versions of both of our previous functions in such a manner:
function fastFunction () {
return new Promise((resolve, reject) => {
setTimeout(function () {
console.log('Fast function done')
resolve()
}, 100)
})
}
function slowFunction () {
return new Promise((resolve, reject) => {
setTimeout(function () {
console.log('Slow function done')
resolve()
}, 300)
})
}
function asyncRunner () {
return Promise.all([slowFunction(), fastFunction()])
}
Please note that Promise.all will fail as soon as any of the promises inside it fails.
The previous functions have been modified slightly to return promises. Our new function, asyncRunner, will also return a promise, that will resolve when all the contained functions resolve, and this also means that wherever we call our asyncRunner, we'll be able to use the .then and .catch methods to deal with the possible outcomes:
asyncRunner()
.then(([ slowResult, fastResult ]) => {
console.log('All operations resolved successfully')
})
.catch((error) => {
console.error('There has been an error:', error)
})
Since node@12.9.0, there is a method called Promise.allSettled that we can use to get the results of all the passed-in promises, regardless of rejections. Much like Promise.all, this function expects an array of promises, and returns an array of objects, each with a status of “fulfilled” or “rejected” and either the resolved value or the error that occurred.
function failingFunction() {
return new Promise((resolve, reject) => {
reject(new Error('This operation will surely fail!'))
})
}
function asyncMixedRunner () {
return Promise.allSettled([slowFunction(), failingFunction()])
}
asyncMixedRunner()
.then(([slowResult, failedResult]) => {
console.log(slowResult, failedResult)
})
In previous node versions, where .allSettled is not available, we can implement our own version in just a few lines:
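One possible sketch:

```javascript
function allSettled (promises) {
  // Wrap each promise so that both outcomes resolve,
  // mirroring the shape of the native Promise.allSettled results
  return Promise.all(promises.map((promise) =>
    promise
      .then((value) => ({ status: 'fulfilled', value }))
      .catch((reason) => ({ status: 'rejected', reason }))
  ))
}
```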
Sometimes you need to make sure your tasks run in a specific order – maybe successive functions need the return values of previous ones, or depend on the run of previous functions less directly. In that case you want a serial runner, which is basically the same as _.flow for functions that return a Promise. As long as it's missing from everyone's favorite utility library, you can easily create a chain from an array of your async functions:
function serial(asyncFunctions) {
return asyncFunctions.reduce(function(functionChain, nextFunction) {
return functionChain.then(
(previousResult) => nextFunction(previousResult)
);
}, Promise.resolve());
}
serial([parameterValidation, dbQuery, serviceCall ])
.then((result) => console.log(`Operation result: ${result}`))
.catch((error) => console.log(`There has been an error: ${error}`))
In case of a failure, this will skip all the remaining promises, and go straight to the error handling branch. You can tweak it some more in case you need the result of all of the promises regardless if they resolved or rejected.
Node also provides a handy utility function called "promisify", which you can use to convert any old callback-expecting function into one that returns a promise. All you need to do is import it in your project:
const promisify = require('util').promisify;
function slowCallbackFunction (done) {
setTimeout(function () {
done()
}, 300)
}
const slowPromise = promisify(slowCallbackFunction);
slowPromise()
.then(() => {
console.log('Slow function resolved')
})
.catch((error) => {
console.error('There has been an error:', error)
})
It's actually not that hard to implement a promisify function of our own, to learn more about how it works. We can even handle additional arguments that our wrapped functions might need!
function homebrewPromisify(originalFunction, originalArgs = []) {
return new Promise((resolve, reject) => {
originalFunction(...originalArgs, (error, result) => {
if (error) return reject(error)
return resolve(result)
})
})
}
We just wrap the original callback-based function in a promise, and then reject or resolve based on the result of the operation.
Easy as that!
For better support of callback based code - legacy code, ~50% of the npm modules - Node also includes a callbackify function, essentially the opposite of promisify, which takes an async function that returns a promise, and returns a function that expects a callback as its single argument.
const callbackify = require('util').callbackify
const callbackSlow = callbackify(slowFunction)
callbackSlow((error, result) => {
if (error) return console.log('Callback function received an error')
return console.log('Callback resolved without errors')
})
#2: Meet Async - aka how to write async code in 2020
We can use another javascript feature since node@7.6 to achieve the same thing: the async and await keywords. They allow you to structure your code in a way that is almost synchronous looking, saving us the .then chaining as well as callbacks:
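A sketch of such a runner – fastFunction and slowFunction are redefined here to resolve with values, so there is something to destructure:

```javascript
function fastFunction () {
  return new Promise((resolve) => setTimeout(() => resolve('fast'), 100))
}

function slowFunction () {
  return new Promise((resolve) => setTimeout(() => resolve('slow'), 300))
}

async function asyncRunner () {
  try {
    // await suspends until both promises resolve; errors land in the catch block
    const [slowResult, fastResult] = await Promise.all([slowFunction(), fastFunction()])
    return [slowResult, fastResult]
  } catch (error) {
    console.error('There has been an error:', error)
    throw error
  }
}
```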
This is the same async runner we've created before, but it does not require us to wrap our code in .then calls to gain access to the results. For handling errors, we have the option to use try & catch blocks, as presented above, or use the same .catch calls that we've seen previously with promises. This is possible because async-await is an abstraction on top of promises - async functions always return a promise, even if you don't explicitly declare them to do so.
The await keyword can only be used inside functions that have the async tag. This also means that we cannot currently utilize it in the global scope.
Since Node 10, we also have access to the Promise.prototype.finally method, which allows us to run code regardless of whether the promise resolved or rejected. It can be used to run tasks that we previously had to call in both the .then and .catch paths, saving us some code duplication.
Using all of this in Practice
As we have just learned several tools and tricks to handle async, it is time to do some practice with fundamental control flows to make our code more efficient and clean.
Let’s take an example and write a route handler for our web app, where the request can be resolved after 3 steps: validateParams, dbQuery and serviceCall.
If you'd like to write them without any helper, you'd most probably end up with something like this. Not so nice, right?
// validateParams, dbQuery, serviceCall are higher-order functions
// DONT
function handler (done) {
validateParams((err) => {
if (err) return done(err)
dbQuery((err, dbResults) => {
if (err) return done(err)
serviceCall((err, serviceResults) => {
done(err, { dbResults, serviceResults })
})
})
})
}
Instead of the callback-hell, we can use promises to refactor our code, as we have already learned:
// validateParams, dbQuery, serviceCall are higher-order functions
function handler () {
return validateParams()
.then(dbQuery)
.then(serviceCall)
.then((result) => {
console.log(result)
return result
})
.catch(console.log.bind(console))
}
Let's take it a step further! Rewrite it to use the async and await keywords:
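A sketch of that rewrite; the three step functions are stubbed with promise-returning placeholders for illustration:

```javascript
// Placeholder implementations of the three steps
const validateParams = () => Promise.resolve()
const dbQuery = () => Promise.resolve({ rows: ['row1'] })
const serviceCall = (dbResults) => Promise.resolve({ status: 'ok', dbResults })

async function handler () {
  try {
    // The steps run one after the other, each awaiting the previous
    await validateParams()
    const dbResults = await dbQuery()
    const result = await serviceCall(dbResults)
    console.log(result)
    return result
  } catch (error) {
    console.log(error)
  }
}
```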
It feels like "synchronous" code, but it is still performing the async operations one after the other.
Under the hood, async-await builds on promises: each await suspends the function and resumes it when the awaited promise settles, which is how the runtime knows when a step has finished.
Takeaway rules for Node.js & Async
Fortunately, Node.js eliminates the complexities of writing thread-safe code. You just have to stick to these rules to keep things smooth:
As a rule of thumb, prefer async: the non-blocking approach gives superior performance over the synchronous scenario, and the async-await keywords give you more flexibility in structuring your code. Luckily, most libraries now have promise-based APIs, so compatibility is rarely an issue, and it can be solved with util.promisify should the need arise.
If you have any questions or suggestions for the article, please let me know in the comments!
This article was originally written by Tamas Hodi and released on January 17, 2017. The revised second edition was authored by Janos Kubisch and Tamas Hodi and released on February 10, 2020.
In this article, I’m going to show how you can quickly generate a static site with Hugo and Netlify in an easy way.
What are static site generators, and why do you need one?
Simply put, a static site generator takes your content, applies it to a template, and generates an HTML based static site. It’s excellent for blogs and landing pages.
Benefits:
Quick deployment
Secure (no dynamic content)
Fast load times
Simple usage
Version control
So, what are the popular options in terms of static site generators?
Gatsby (React/JS)
Hugo (Go)
Next.js (React/JS)
Jekyll (Ruby)
Gridsome (Vue/JS)
These are the most starred projects on GitHub. I've read about Hugo previously, and it seemed fun to try out, so I’m going to stick with Hugo.
What is Hugo?
The official website states that Hugo is the world's fastest static website engine.
We can confirm that it’s really fast. Hugo is written in Golang. It also comes with a rich theming system and aims to make building websites fun again.
Let’s see what we got here.
Installing Hugo
Mac:
brew install hugo
Linux:
sudo apt-get install hugo
or
sudo pacman -Syu hugo
To verify your install:
hugo version
Using Hugo
Create a new project:
hugo new site my-project
Add a theme for a quick start. You can find themes here.
cd my-project
git init
git submodule add https://github.com/budparr/gohugo-theme-ananke.git themes/ananke
Add the theme to the config file.
echo 'theme = "ananke"' >> config.toml
Add some content.
hugo new posts/my-first-post.md
It should look something like this:
---
title: "My First Post"
date: 2020-01-05T18:37:11+01:00
draft: true
---
Hello World!
There are lots of options (tags, description, categories, author) you can add to the front matter.
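For instance, an extended front matter block might look like the following; the values are only examples, and fields beyond the defaults depend on your theme:

```yaml
---
title: "My First Post"
date: 2020-01-05T18:37:11+01:00
draft: true
tags: ["hugo", "netlify"]
categories: ["devops"]
description: "A short summary shown on list pages"
author: "Your Name"
---
```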
Hugo generates the following directory structure:
.
├── archetypes
├── assets (not created by default)
├── config.toml
├── content
├── data
├── layouts
├── static
└── themes
archetypes: Archetypes are content template files that contain preconfigured front matter (date, title, draft). You can create new archetypes with custom preconfigured front matter fields.
assets: Stores all the files that are processed by Hugo Pipes (e.g. CSS/Sass files). This directory is not created by default.
config.toml: Hugo uses config.toml, config.yaml, or config.json (if found in the site root) as the default site config file. Instead of a single config file, you can also use a config directory to separate different environments.
content: This is where all the content files live. Top-level folders count as content sections. If you have devops and nodejs sections, then you will have content/devops/first-post.md and content/nodejs/second-post.md files.
data: This directory is used to store configuration files that can be used by Hugo when generating your website.
layouts: Stores templates in the form of .html files. See the Styling section for more information.
static: Stores all the static content: images, CSS, JavaScript, etc. When Hugo builds your site, all assets inside your static directory are copied over as-is.
themes: Hugo theme of your choice.
Styling our static site
Remember, we applied a theme before. Now, if we inspect the themes folder, we can see the styling files.
But beware!
DO NOT EDIT THESE FILES DIRECTLY.
Instead, we will mirror the theme directory structure to the root layouts folder.
Let's say I want to apply custom CSS to the theme.
The theme has a themes/theme-name/layouts/partials folder containing some HTML templates (header.html, footer.html). We will edit the header.html template: copy the contents of this file to layouts/partials/header.html, taking care to recreate the theme's directory structure inside the root layouts folder.
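Assuming the Ananke theme added earlier, the mirrored override might look like this:

```
my-project/
├── layouts/
│   └── partials/
│       └── header.html   # your copy — edit this one
└── themes/
    └── ananke/
        └── layouts/
            └── partials/
                └── header.html   # original — left untouched
```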
Create a custom CSS file: static/css/custom-style.css.
Add the custom css file to config.toml:
[params]
custom_css = ["css/custom-style.css"]
Open layouts/partials/header.html:
Add this code inside the <head> tag:
{{ range .Site.Params.custom_css -}}
<link rel="stylesheet" href="{{ . | absURL }}">
{{- end }}
Now you can overwrite CSS classes applied by your theme.
Deploying our static site to Netlify
One of the benefits of a static site is that it's easy to deploy. Netlify and AWS S3 are both very good choices for hosting a static site. Let's see how to deploy it to Netlify.
Requirements:
Netlify account
Github repository
What to do on Netlify
Create a git repository
Create a netlify.toml file into the root of your project with the content below.
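A minimal netlify.toml for a Hugo site might look like the following; the Hugo version pin is an assumption, so match it to your installed version:

```toml
[build]
  publish = "public"   # Hugo's default output directory
  command = "hugo"

[build.environment]
  # Assumed version pin — adjust to the Hugo version you use locally
  HUGO_VERSION = "0.64.0"
```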
It seems that Delphi 10.3.3 Rio adds some bogus Android 64-bit entries to the .dproj files when opening a project from a previous version. As this undermines the normalizing algorithm in Project Magician, I added some code to clean up that mess first.